00:00:00.001 Started by upstream project "autotest-per-patch" build number 132390
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.130 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.131 The recommended git tool is: git
00:00:00.131 using credential 00000000-0000-0000-0000-000000000002
00:00:00.132 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.156 Fetching changes from the remote Git repository
00:00:00.158 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.182 Using shallow fetch with depth 1
00:00:00.182 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.182 > git --version # timeout=10
00:00:00.201 > git --version # 'git version 2.39.2'
00:00:00.201 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.219 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.219 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.855 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.867 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.880 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.880 > git config core.sparsecheckout # timeout=10
00:00:06.890 > git read-tree -mu HEAD # timeout=10
00:00:06.906 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.928 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.929 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
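The block above is Jenkins' usual shallow-fetch-and-detach pattern. A minimal standalone sketch of the same sequence (URL and ref as logged; the local directory name is illustrative):

  git init jbp && cd jbp
  git fetch --tags --force --progress --depth=1 -- \
      https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  git checkout -f FETCH_HEAD   # detaches at db4637e8b94, the revision logged above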
00:00:07.029 [Pipeline] Start of Pipeline
00:00:07.045 [Pipeline] library
00:00:07.047 Loading library shm_lib@master
00:00:07.047 Library shm_lib@master is cached. Copying from home.
00:00:07.064 [Pipeline] node
00:00:07.076 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest_2
00:00:07.078 [Pipeline] {
00:00:07.090 [Pipeline] catchError
00:00:07.092 [Pipeline] {
00:00:07.106 [Pipeline] wrap
00:00:07.115 [Pipeline] {
00:00:07.121 [Pipeline] stage
00:00:07.122 [Pipeline] { (Prologue)
00:00:07.141 [Pipeline] echo
00:00:07.143 Node: VM-host-SM38
00:00:07.151 [Pipeline] cleanWs
00:00:07.163 [WS-CLEANUP] Deleting project workspace...
00:00:07.163 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.170 [WS-CLEANUP] done
00:00:07.379 [Pipeline] setCustomBuildProperty
00:00:07.462 [Pipeline] httpRequest
00:00:07.811 [Pipeline] echo
00:00:07.812 Sorcerer 10.211.164.20 is alive
00:00:07.819 [Pipeline] retry
00:00:07.820 [Pipeline] {
00:00:07.831 [Pipeline] httpRequest
00:00:07.835 HttpMethod: GET
00:00:07.836 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.837 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.849 Response Code: HTTP/1.1 200 OK
00:00:07.850 Success: Status code 200 is in the accepted range: 200,404
00:00:07.850 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:16.512 [Pipeline] }
00:00:16.530 [Pipeline] // retry
00:00:16.538 [Pipeline] sh
00:00:16.827 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:16.845 [Pipeline] httpRequest
00:00:17.278 [Pipeline] echo
00:00:17.280 Sorcerer 10.211.164.20 is alive
00:00:17.290 [Pipeline] retry
00:00:17.292 [Pipeline] {
00:00:17.307 [Pipeline] httpRequest
00:00:17.313 HttpMethod: GET
00:00:17.314 URL: http://10.211.164.20/packages/spdk_bc5264bd50de072d3c4a6d17d6573e2b3229b6e0.tar.gz
00:00:17.314 Sending request to url: http://10.211.164.20/packages/spdk_bc5264bd50de072d3c4a6d17d6573e2b3229b6e0.tar.gz
00:00:17.330 Response Code: HTTP/1.1 200 OK
00:00:17.330 Success: Status code 200 is in the accepted range: 200,404
00:00:17.331 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_bc5264bd50de072d3c4a6d17d6573e2b3229b6e0.tar.gz
00:00:48.957 [Pipeline] }
00:00:48.976 [Pipeline] // retry
00:00:48.985 [Pipeline] sh
00:00:49.265 + tar --no-same-owner -xf spdk_bc5264bd50de072d3c4a6d17d6573e2b3229b6e0.tar.gz
00:00:51.807 [Pipeline] sh
00:00:52.084 + git -C spdk log --oneline -n5
00:00:52.085 bc5264bd5 nvme: Fix discovery loop when target has no entry
00:00:52.085 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc
00:00:52.085 c0b2ac5c9 bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit()
00:00:52.085 92fb22519 dif: dif_generate/verify_copy() supports NVMe PRACT = 1 and MD size > PI size
00:00:52.085 79daf868a dif: Add SPDK_DIF_FLAGS_NVME_PRACT for dif_generate/verify_copy()
00:00:52.104 [Pipeline] writeFile
00:00:52.120 [Pipeline] sh
00:00:52.405 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:52.416 [Pipeline] sh
00:00:52.694 + cat autorun-spdk.conf
00:00:52.694 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:52.694 SPDK_TEST_NVME=1
00:00:52.694 SPDK_TEST_FTL=1
00:00:52.694 SPDK_TEST_ISAL=1
00:00:52.694 SPDK_RUN_ASAN=1
00:00:52.694 SPDK_RUN_UBSAN=1
00:00:52.694 SPDK_TEST_XNVME=1
00:00:52.694 SPDK_TEST_NVME_FDP=1
00:00:52.694 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:52.700 RUN_NIGHTLY=0
00:00:52.702 [Pipeline] }
00:00:52.718 [Pipeline] // stage
00:00:52.736 [Pipeline] stage
00:00:52.738 [Pipeline] { (Run VM)
00:00:52.753 [Pipeline] sh
00:00:53.031 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:53.031 + echo 'Start stage prepare_nvme.sh'
00:00:53.031 Start stage prepare_nvme.sh
00:00:53.031 + [[ -n 7 ]]
00:00:53.031 + disk_prefix=ex7
00:00:53.031 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]]
00:00:53.031 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]]
00:00:53.031 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf
00:00:53.031 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:53.031 ++ SPDK_TEST_NVME=1
00:00:53.031 ++ SPDK_TEST_FTL=1
00:00:53.031 ++ SPDK_TEST_ISAL=1
00:00:53.031 ++ SPDK_RUN_ASAN=1
00:00:53.031 ++ SPDK_RUN_UBSAN=1
00:00:53.031 ++ SPDK_TEST_XNVME=1
00:00:53.031 ++ SPDK_TEST_NVME_FDP=1
00:00:53.031 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:53.031 ++ RUN_NIGHTLY=0
00:00:53.031 + cd /var/jenkins/workspace/nvme-vg-autotest_2
00:00:53.031 + nvme_files=()
00:00:53.031 + declare -A nvme_files
00:00:53.031 + backend_dir=/var/lib/libvirt/images/backends
00:00:53.031 + nvme_files['nvme.img']=5G
00:00:53.031 + nvme_files['nvme-cmb.img']=5G
00:00:53.031 + nvme_files['nvme-multi0.img']=4G
00:00:53.031 + nvme_files['nvme-multi1.img']=4G
00:00:53.031 + nvme_files['nvme-multi2.img']=4G
00:00:53.031 + nvme_files['nvme-openstack.img']=8G
00:00:53.031 + nvme_files['nvme-zns.img']=5G
00:00:53.031 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:53.031 + (( SPDK_TEST_FTL == 1 ))
00:00:53.031 + nvme_files["nvme-ftl.img"]=6G
00:00:53.032 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:53.032 + nvme_files["nvme-fdp.img"]=1G
00:00:53.032 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:53.032 + for nvme in "${!nvme_files[@]}"
00:00:53.032 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:00:53.032 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:53.032 + for nvme in "${!nvme_files[@]}"
00:00:53.032 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-ftl.img -s 6G
00:00:53.032 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:00:53.032 + for nvme in "${!nvme_files[@]}"
00:00:53.032 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:00:53.032 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:53.290 + for nvme in "${!nvme_files[@]}"
00:00:53.290 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:00:53.290 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:53.290 + for nvme in "${!nvme_files[@]}"
00:00:53.290 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:00:53.290 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:53.290 + for nvme in "${!nvme_files[@]}"
00:00:53.290 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:00:53.290 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:53.290 + for nvme in "${!nvme_files[@]}"
00:00:53.290 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:00:53.290 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:53.290 + for nvme in "${!nvme_files[@]}"
00:00:53.290 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-fdp.img -s 1G
00:00:53.290 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:00:53.290 + for nvme in "${!nvme_files[@]}"
00:00:53.290 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:00:53.547 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:53.547 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:00:53.547 + echo 'End stage prepare_nvme.sh'
00:00:53.547 End stage prepare_nvme.sh
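Each Formatting line above is the output of an image-create call; judging by the fmt=raw/preallocation=falloc wording, create_nvme_img.sh appears to wrap qemu-img (an assumption; the script body is not shown in this log). A sketch of one such entry:

  img=/var/lib/libvirt/images/backends/ex7-nvme.img
  sudo qemu-img create -f raw -o preallocation=falloc "$img" 5G
  # prints: Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc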
00:00:53.555 [Pipeline] sh
00:00:53.828 + DISTRO=fedora39
00:00:53.828 + CPUS=10
00:00:53.828 + RAM=12288
00:00:53.828 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:53.828 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex7-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:00:53.828
00:00:53.828 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant
00:00:53.829 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk
00:00:53.829 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2
00:00:53.829 HELP=0
00:00:53.829 DRY_RUN=0
00:00:53.829 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,
00:00:53.829 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:00:53.829 NVME_AUTO_CREATE=0
00:00:53.829 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,,
00:00:53.829 NVME_CMB=,,,,
00:00:53.829 NVME_PMR=,,,,
00:00:53.829 NVME_ZNS=,,,,
00:00:53.829 NVME_MS=true,,,,
00:00:53.829 NVME_FDP=,,,on,
00:00:53.829 SPDK_VAGRANT_DISTRO=fedora39
00:00:53.829 SPDK_VAGRANT_VMCPU=10
00:00:53.829 SPDK_VAGRANT_VMRAM=12288
00:00:53.829 SPDK_VAGRANT_PROVIDER=libvirt
00:00:53.829 SPDK_VAGRANT_HTTP_PROXY=
00:00:53.829 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:53.829 SPDK_OPENSTACK_NETWORK=0
00:00:53.829 VAGRANT_PACKAGE_BOX=0
00:00:53.829 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:00:53.829 FORCE_DISTRO=true
00:00:53.829 VAGRANT_BOX_VERSION=
00:00:53.829 EXTRA_VAGRANTFILES=
00:00:53.829 NIC_MODEL=e1000
00:00:53.829
00:00:53.829 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt'
00:00:53.829 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2
00:00:56.358 Bringing machine 'default' up with 'libvirt' provider...
00:00:56.616 ==> default: Creating image (snapshot of base box volume).
00:00:56.874 ==> default: Creating domain with the following settings...
00:00:56.874 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732106001_5c7a18175daf0e5cf77b
00:00:56.874 ==> default: -- Domain type: kvm
00:00:56.874 ==> default: -- Cpus: 10
00:00:56.874 ==> default: -- Feature: acpi
00:00:56.874 ==> default: -- Feature: apic
00:00:56.874 ==> default: -- Feature: pae
00:00:56.874 ==> default: -- Memory: 12288M
00:00:56.874 ==> default: -- Memory Backing: hugepages:
00:00:56.874 ==> default: -- Management MAC:
00:00:56.874 ==> default: -- Loader:
00:00:56.874 ==> default: -- Nvram:
00:00:56.874 ==> default: -- Base box: spdk/fedora39
00:00:56.874 ==> default: -- Storage pool: default
00:00:56.874 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732106001_5c7a18175daf0e5cf77b.img (20G)
00:00:56.874 ==> default: -- Volume Cache: default
00:00:56.874 ==> default: -- Kernel:
00:00:56.874 ==> default: -- Initrd:
00:00:56.874 ==> default: -- Graphics Type: vnc
00:00:56.874 ==> default: -- Graphics Port: -1
00:00:56.874 ==> default: -- Graphics IP: 127.0.0.1
00:00:56.874 ==> default: -- Graphics Password: Not defined
00:00:56.874 ==> default: -- Video Type: cirrus
00:00:56.874 ==> default: -- Video VRAM: 9216
00:00:56.874 ==> default: -- Sound Type:
00:00:56.874 ==> default: -- Keymap: en-us
00:00:56.874 ==> default: -- TPM Path:
00:00:56.874 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:56.874 ==> default: -- Command line args:
00:00:56.874 ==> default: -> value=-device,
00:00:56.874 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:56.874 ==> default: -> value=-drive,
00:00:56.874 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:00:56.874 ==> default: -> value=-device,
00:00:56.874 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:00:56.874 ==> default: -> value=-device,
00:00:56.874 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:56.874 ==> default: -> value=-drive,
00:00:56.874 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0,
00:00:56.874 ==> default: -> value=-device,
00:00:56.874 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:56.874 ==> default: -> value=-device,
00:00:56.874 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:00:56.874 ==> default: -> value=-drive,
00:00:56.874 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:00:56.874 ==> default: -> value=-device,
00:00:56.874 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:56.874 ==> default: -> value=-drive,
00:00:56.874 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:00:56.874 ==> default: -> value=-device,
00:00:56.874 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:56.874 ==> default: -> value=-drive,
00:00:56.874 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:00:56.874 ==> default: -> value=-device,
00:00:56.874 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:56.874 ==> default: -> value=-device,
00:00:56.874 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:00:56.874 ==> default: -> value=-device,
00:00:56.874 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:00:56.874 ==> default: -> value=-drive,
00:00:56.874 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:00:56.874 ==> default: -> value=-device,
00:00:56.874 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
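Each emulated controller above is assembled from the same three QEMU pieces: an nvme controller device, a backing -drive with if=none, and an nvme-ns namespace device bound to it. A minimal single-controller sketch using the logged options (the usual machine/memory/accel arguments are omitted here):

  qemu-system-x86_64 \
      -device nvme,id=nvme-1,serial=12341,addr=0x11 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0 \
      -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096

The fourth controller additionally attaches to an explicit nvme-subsys device with fdp=on, which is how QEMU models an NVMe subsystem with Flexible Data Placement enabled for the SPDK_TEST_NVME_FDP run.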
00:00:56.874 ==> default: Creating shared folders metadata...
00:00:57.132 ==> default: Starting domain.
00:00:58.506 ==> default: Waiting for domain to get an IP address...
00:01:16.580 ==> default: Waiting for SSH to become available...
00:01:16.580 ==> default: Configuring and enabling network interfaces...
00:01:18.484 default: SSH address: 192.168.121.181:22
00:01:18.484 default: SSH username: vagrant
00:01:18.484 default: SSH auth method: private key
00:01:20.388 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:27.008 ==> default: Mounting SSHFS shared folder...
00:01:28.918 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:28.918 ==> default: Checking Mount..
00:01:29.852 ==> default: Folder Successfully Mounted!
00:01:29.852
00:01:29.852 SUCCESS!
00:01:29.852
00:01:29.852 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:01:29.852 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:29.852 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:01:29.852
00:01:29.860 [Pipeline] }
00:01:29.876 [Pipeline] // stage
00:01:29.885 [Pipeline] dir
00:01:29.886 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt
00:01:29.888 [Pipeline] {
00:01:29.901 [Pipeline] catchError
00:01:29.902 [Pipeline] {
00:01:29.915 [Pipeline] sh
00:01:30.193 + vagrant ssh-config --host vagrant
00:01:30.193 + sed -ne '/^Host/,$p'
00:01:30.193 + tee ssh_conf
00:01:32.730 Host vagrant
00:01:32.730 HostName 192.168.121.181
00:01:32.730 User vagrant
00:01:32.730 Port 22
00:01:32.730 UserKnownHostsFile /dev/null
00:01:32.730 StrictHostKeyChecking no
00:01:32.730 PasswordAuthentication no
00:01:32.730 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:32.730 IdentitiesOnly yes
00:01:32.730 LogLevel FATAL
00:01:32.730 ForwardAgent yes
00:01:32.730 ForwardX11 yes
00:01:32.730
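The pipeline reuses this captured OpenSSH configuration for every remote step that follows; the pattern, in short (the uname command is illustrative):

  vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' > ssh_conf
  ssh -t -F ssh_conf vagrant@vagrant 'uname -a'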
00:01:32.744 [Pipeline] withEnv
00:01:32.746 [Pipeline] {
00:01:32.759 [Pipeline] sh
00:01:33.044 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:01:33.044 source /etc/os-release
00:01:33.044 [[ -e /image.version ]] && img=$(< /image.version)
00:01:33.044 # Minimal, systemd-like check.
00:01:33.044 if [[ -e /.dockerenv ]]; then
00:01:33.044 # Clear garbage from the node'\''s name:
00:01:33.044 # agt-er_autotest_547-896 -> autotest_547-896
00:01:33.045 # $HOSTNAME is the actual container id
00:01:33.045 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:33.045 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:33.045 # We can assume this is a mount from a host where container is running,
00:01:33.045 # so fetch its hostname to easily identify the target swarm worker.
00:01:33.045 container="$(< /etc/hostname) ($agent)"
00:01:33.045 else
00:01:33.045 # Fallback
00:01:33.045 container=$agent
00:01:33.045 fi
00:01:33.045 fi
00:01:33.045 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:33.045 '
00:01:33.056 [Pipeline] }
00:01:33.067 [Pipeline] // withEnv
00:01:33.074 [Pipeline] setCustomBuildProperty
00:01:33.084 [Pipeline] stage
00:01:33.085 [Pipeline] { (Tests)
00:01:33.098 [Pipeline] sh
00:01:33.382 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:33.398 [Pipeline] sh
00:01:33.687 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:33.963 [Pipeline] timeout
00:01:33.963 Timeout set to expire in 50 min
00:01:33.964 [Pipeline] {
00:01:33.977 [Pipeline] sh
00:01:34.259 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:01:34.521 HEAD is now at bc5264bd5 nvme: Fix discovery loop when target has no entry
00:01:34.536 [Pipeline] sh
00:01:34.823 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:01:35.099 [Pipeline] sh
00:01:35.383 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:35.401 [Pipeline] sh
00:01:35.758 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:01:35.758 ++ readlink -f spdk_repo
00:01:35.759 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:35.759 + [[ -n /home/vagrant/spdk_repo ]]
00:01:35.759 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:35.759 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:35.759 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:35.759 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:35.759 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:35.759 + [[ nvme-vg-autotest == pkgdep-* ]]
00:01:35.759 + cd /home/vagrant/spdk_repo
00:01:35.759 + source /etc/os-release
00:01:35.759 ++ NAME='Fedora Linux'
00:01:35.759 ++ VERSION='39 (Cloud Edition)'
00:01:35.759 ++ ID=fedora
00:01:35.759 ++ VERSION_ID=39
00:01:35.759 ++ VERSION_CODENAME=
00:01:35.759 ++ PLATFORM_ID=platform:f39
00:01:35.759 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:35.759 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:35.759 ++ LOGO=fedora-logo-icon
00:01:35.759 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:35.759 ++ HOME_URL=https://fedoraproject.org/
00:01:35.759 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:35.759 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:35.759 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:35.759 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:35.759 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:35.759 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:35.759 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:35.759 ++ SUPPORT_END=2024-11-12
00:01:35.759 ++ VARIANT='Cloud Edition'
00:01:35.759 ++ VARIANT_ID=cloud
00:01:35.759 + uname -a
00:01:35.759 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:35.759 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:36.019 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:36.281 Hugepages
00:01:36.281 node hugesize free / total
00:01:36.281 node0 1048576kB 0 / 0
00:01:36.281 node0 2048kB 0 / 0
00:01:36.281
00:01:36.281 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:36.542 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:36.542 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:36.542 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:01:36.542 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:01:36.542 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
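setup.sh is SPDK's hugepage and PCI device-binding helper; the status verb (above) only reports. Typical invocations, as a sketch (the HUGEMEM value is illustrative):

  sudo scripts/setup.sh status          # report hugepage and device-binding state, as above
  sudo HUGEMEM=4096 scripts/setup.sh    # reserve hugepages and bind NVMe devices for userspace drivers
  sudo scripts/setup.sh reset           # return devices to their kernel drivers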
00:01:36.542 + rm -f /tmp/spdk-ld-path
00:01:36.542 + source autorun-spdk.conf
00:01:36.542 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:36.542 ++ SPDK_TEST_NVME=1
00:01:36.542 ++ SPDK_TEST_FTL=1
00:01:36.542 ++ SPDK_TEST_ISAL=1
00:01:36.542 ++ SPDK_RUN_ASAN=1
00:01:36.542 ++ SPDK_RUN_UBSAN=1
00:01:36.542 ++ SPDK_TEST_XNVME=1
00:01:36.542 ++ SPDK_TEST_NVME_FDP=1
00:01:36.542 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:36.543 ++ RUN_NIGHTLY=0
00:01:36.543 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:36.543 + [[ -n '' ]]
00:01:36.543 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:36.543 + for M in /var/spdk/build-*-manifest.txt
00:01:36.543 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:36.543 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:36.543 + for M in /var/spdk/build-*-manifest.txt
00:01:36.543 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:36.543 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:36.543 + for M in /var/spdk/build-*-manifest.txt
00:01:36.543 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:36.543 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:36.543 ++ uname
00:01:36.543 + [[ Linux == \L\i\n\u\x ]]
00:01:36.543 + sudo dmesg -T
00:01:36.543 + sudo dmesg --clear
00:01:36.543 + dmesg_pid=5019
00:01:36.543 + sudo dmesg -Tw
00:01:36.543 + [[ Fedora Linux == FreeBSD ]]
00:01:36.543 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:36.543 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:36.543 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:36.543 + [[ -x /usr/src/fio-static/fio ]]
00:01:36.543 + export FIO_BIN=/usr/src/fio-static/fio
00:01:36.543 + FIO_BIN=/usr/src/fio-static/fio
00:01:36.543 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:36.543 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:36.543 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:36.543 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:36.543 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:36.543 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:36.543 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:36.543 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:36.543 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:36.543 12:34:02 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:36.543 12:34:02 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:36.543 12:34:02 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:36.543 12:34:02 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:01:36.543 12:34:02 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:01:36.543 12:34:02 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:01:36.543 12:34:02 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:01:36.543 12:34:02 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:01:36.543 12:34:02 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:01:36.543 12:34:02 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:01:36.543 12:34:02 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:36.543 12:34:02 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:01:36.543 12:34:02 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:36.543 12:34:02 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:36.804 12:34:02 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:36.804 12:34:02 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:36.804 12:34:02 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:36.804 12:34:02 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:36.804 12:34:02 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:36.804 12:34:02 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:36.804 12:34:02 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:36.804 12:34:02 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:36.804 12:34:02 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:36.805 12:34:02 -- paths/export.sh@5 -- $ export PATH
00:01:36.805 12:34:02 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:36.805 12:34:02 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:36.805 12:34:02 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:36.805 12:34:02 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732106042.XXXXXX
00:01:36.805 12:34:02 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732106042.fQsYn0
00:01:36.805 12:34:02 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:36.805 12:34:02 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:36.805 12:34:02 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:36.805 12:34:02 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:36.805 12:34:02 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:36.805 12:34:02 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:36.805 12:34:02 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:36.805 12:34:02 -- common/autotest_common.sh@10 -- $ set +x
00:01:36.805 12:34:02 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:01:36.805 12:34:02 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:36.805 12:34:02 -- pm/common@17 -- $ local monitor
00:01:36.805 12:34:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:36.805 12:34:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:36.805 12:34:02 -- pm/common@25 -- $ sleep 1
00:01:36.805 12:34:02 -- pm/common@21 -- $ date +%s
00:01:36.805 12:34:02 -- pm/common@21 -- $ date +%s
00:01:36.805 12:34:02 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732106042
00:01:36.805 12:34:02 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732106042
00:01:36.805 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732106042_collect-cpu-load.pm.log
00:01:36.805 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732106042_collect-vmstat.pm.log
00:01:37.747 12:34:03 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:37.747 12:34:03 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:37.747 12:34:03 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:37.747 12:34:03 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:37.747 12:34:03 -- spdk/autobuild.sh@16 -- $ date -u
00:01:37.747 Wed Nov 20 12:34:03 PM UTC 2024
00:01:37.748 12:34:03 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:37.748 v25.01-pre-220-gbc5264bd5
00:01:37.748 12:34:03 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:37.748 12:34:03 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:37.748 12:34:03 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:37.748 12:34:03 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:37.748 12:34:03 -- common/autotest_common.sh@10 -- $ set +x
00:01:37.748 ************************************
00:01:37.748 START TEST asan
00:01:37.748 ************************************
00:01:37.748 using asan
00:01:37.748 12:34:03 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:37.748
00:01:37.748 real 0m0.000s
00:01:37.748 user 0m0.000s
00:01:37.748 sys 0m0.000s
00:01:37.748 12:34:03 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:37.748 ************************************
00:01:37.748 END TEST asan
00:01:37.748 ************************************
00:01:37.748 12:34:03 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:37.748 12:34:03 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:37.748 12:34:03 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:37.748 12:34:03 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:37.748 12:34:03 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:37.748 12:34:03 -- common/autotest_common.sh@10 -- $ set +x
00:01:37.748 ************************************
00:01:37.748 START TEST ubsan
00:01:37.748 ************************************
00:01:37.748 using ubsan
00:01:37.748 12:34:03 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:37.748
00:01:37.748 real 0m0.000s
00:01:37.748 user 0m0.000s
00:01:37.748 sys 0m0.000s
00:01:37.748 12:34:03 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:37.748 12:34:03 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:37.748 ************************************
00:01:37.748 END TEST ubsan
00:01:37.748 ************************************
00:01:37.748 12:34:03 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:37.748 12:34:03 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:37.748 12:34:03 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:37.748 12:34:03 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:37.748 12:34:03 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:37.748 12:34:03 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:37.748 12:34:03 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:37.748 12:34:03 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:37.748 12:34:03 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:01:38.009 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:38.009 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:38.270 Using 'verbs' RDMA provider
00:01:49.203 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:01.435 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:01.435 Creating mk/config.mk...done.
00:02:01.435 Creating mk/cc.flags.mk...done.
00:02:01.435 Type 'make' to build.
00:02:01.435 12:34:25 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:01.435 12:34:25 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:01.435 12:34:25 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:01.435 12:34:25 -- common/autotest_common.sh@10 -- $ set +x
00:02:01.435 ************************************
00:02:01.435 START TEST make
00:02:01.435 ************************************
00:02:01.435 12:34:25 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:01.435 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:01.435 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:01.435 meson setup builddir \
00:02:01.435 -Dwith-libaio=enabled \
00:02:01.435 -Dwith-liburing=enabled \
00:02:01.435 -Dwith-libvfn=disabled \
00:02:01.435 -Dwith-spdk=disabled \
00:02:01.435 -Dexamples=false \
00:02:01.435 -Dtests=false \
00:02:01.435 -Dtools=false && \
00:02:01.435 meson compile -C builddir && \
00:02:01.435 cd -)
00:02:01.435 make[1]: Nothing to be done for 'all'.
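The parenthesized subshell above is how the SPDK build drives the bundled xnvme; the same meson flow works standalone (a sketch; the install step with DESTDIR is an optional addition not performed in this run):

  cd /home/vagrant/spdk_repo/spdk/xnvme
  meson setup builddir -Dwith-libaio=enabled -Dwith-liburing=enabled \
      -Dwith-libvfn=disabled -Dwith-spdk=disabled \
      -Dexamples=false -Dtests=false -Dtools=false
  meson compile -C builddir
  DESTDIR=/tmp/xnvme-root meson install -C builddir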
00:02:02.378 The Meson build system
00:02:02.378 Version: 1.5.0
00:02:02.378 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:02.378 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:02.378 Build type: native build
00:02:02.378 Project name: xnvme
00:02:02.378 Project version: 0.7.5
00:02:02.378 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:02.378 C linker for the host machine: cc ld.bfd 2.40-14
00:02:02.378 Host machine cpu family: x86_64
00:02:02.378 Host machine cpu: x86_64
00:02:02.378 Message: host_machine.system: linux
00:02:02.378 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:02.378 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:02.378 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:02.378 Run-time dependency threads found: YES
00:02:02.378 Has header "setupapi.h" : NO
00:02:02.378 Has header "linux/blkzoned.h" : YES
00:02:02.378 Has header "linux/blkzoned.h" : YES (cached)
00:02:02.378 Has header "libaio.h" : YES
00:02:02.378 Library aio found: YES
00:02:02.378 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:02.378 Run-time dependency liburing found: YES 2.2
00:02:02.378 Dependency libvfn skipped: feature with-libvfn disabled
00:02:02.378 Found CMake: /usr/bin/cmake (3.27.7)
00:02:02.378 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:02:02.378 Subproject spdk : skipped: feature with-spdk disabled
00:02:02.378 Run-time dependency appleframeworks found: NO (tried framework)
00:02:02.378 Run-time dependency appleframeworks found: NO (tried framework)
00:02:02.378 Library rt found: YES
00:02:02.378 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:02.378 Configuring xnvme_config.h using configuration
00:02:02.378 Configuring xnvme.spec using configuration
00:02:02.378 Run-time dependency bash-completion found: YES 2.11
00:02:02.378 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:02.378 Program cp found: YES (/usr/bin/cp)
00:02:02.378 Build targets in project: 3
00:02:02.378
00:02:02.378 xnvme 0.7.5
00:02:02.378
00:02:02.378 Subprojects
00:02:02.378 spdk : NO Feature 'with-spdk' disabled
00:02:02.378
00:02:02.378 User defined options
00:02:02.378 examples : false
00:02:02.378 tests : false
00:02:02.378 tools : false
00:02:02.378 with-libaio : enabled
00:02:02.378 with-liburing: enabled
00:02:02.378 with-libvfn : disabled
00:02:02.378 with-spdk : disabled
00:02:02.378
00:02:02.378 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:02.638 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:02.638 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:02:02.899 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:02:02.899 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:02:02.899 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:02:02.899 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:02:02.899 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:02:02.899 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:02:02.899 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:02:02.899 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:02:02.899 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:02:02.899 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:02:02.899 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:02:02.899 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:02:02.899 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:02:02.899 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:02:02.899 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:02:02.899 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:02:02.899 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:02:02.899 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:02:02.899 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:02:03.161 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:02:03.161 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:02:03.161 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:02:03.161 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:02:03.161 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:02:03.161 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:02:03.161 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:02:03.161 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:02:03.161 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:02:03.161 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:02:03.161 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:02:03.161 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:02:03.161 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:02:03.161 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:02:03.161 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:02:03.161 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:02:03.161 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:02:03.161 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:02:03.161 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:02:03.161 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:02:03.161 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:02:03.161 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:02:03.161 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:02:03.161 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:02:03.161 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:02:03.161 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:02:03.161 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:02:03.161 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:02:03.161 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:02:03.161 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:03.161 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:02:03.161 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:03.161 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:03.161 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:03.161 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:03.161 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:03.423 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:03.423 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:03.423 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:03.423 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:03.423 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:03.423 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:03.423 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:03.423 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:03.423 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:03.423 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:03.423 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:03.423 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:03.423 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:03.423 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:03.685 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:03.685 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:03.685 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:03.944 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:03.944 [75/76] Linking static target lib/libxnvme.a
00:02:03.944 [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:03.944 INFO: autodetecting backend as ninja
00:02:03.944 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:10.557 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:10.557 The Meson build system
00:02:10.557 Version: 1.5.0
00:02:10.557 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:10.557 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:10.557 Build type: native build
00:02:10.557 Program cat found: YES (/usr/bin/cat)
00:02:10.557 Project name: DPDK
00:02:10.557 Project version: 24.03.0
00:02:10.557 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:10.557 C linker for the host machine: cc ld.bfd 2.40-14
00:02:10.557 Host machine cpu family: x86_64
00:02:10.557 Host machine cpu: x86_64
00:02:10.557 Message: ## Building in Developer Mode ##
00:02:10.557 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:10.557 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:10.557 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:10.557 Program python3 found: YES (/usr/bin/python3)
00:02:10.557 Program cat found: YES (/usr/bin/cat)
00:02:10.557 Compiler for C supports arguments -march=native: YES
00:02:10.557 Checking for size of "void *" : 8
00:02:10.557 Checking for size of "void *" : 8 (cached)
00:02:10.557 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:10.557 Library m found: YES
00:02:10.557 Library numa found: YES
00:02:10.557 Has header "numaif.h" : YES
00:02:10.557 Library fdt found: NO
00:02:10.557 Library execinfo found: NO
00:02:10.557 Has header "execinfo.h" : YES
00:02:10.557 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:10.557 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:10.557 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:10.557 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:10.557 Run-time dependency openssl found: YES 3.1.1
00:02:10.557 Run-time dependency libpcap found: YES 1.10.4
00:02:10.557 Has header "pcap.h" with dependency libpcap: YES
00:02:10.557 Compiler for C supports arguments -Wcast-qual: YES
00:02:10.557 Compiler for C supports arguments -Wdeprecated: YES
00:02:10.557 Compiler for C supports arguments -Wformat: YES
00:02:10.557 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:10.557 Compiler for C supports arguments -Wformat-security: NO
00:02:10.557 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:10.557 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:10.557 Compiler for C supports arguments -Wnested-externs: YES
00:02:10.557 Compiler for C supports arguments -Wold-style-definition: YES
00:02:10.557 Compiler for C supports arguments -Wpointer-arith: YES
00:02:10.557 Compiler for C supports arguments -Wsign-compare: YES
00:02:10.557 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:10.557 Compiler for C supports arguments -Wundef: YES
00:02:10.557 Compiler for C supports arguments -Wwrite-strings: YES
00:02:10.557 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:10.557 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:10.557 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:10.557 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:10.557 Program objdump found: YES (/usr/bin/objdump)
00:02:10.557 Compiler for C supports arguments -mavx512f: YES
00:02:10.557 Checking if "AVX512 checking" compiles: YES
00:02:10.557 Fetching value of define "__SSE4_2__" : 1
00:02:10.557 Fetching value of define "__AES__" : 1
00:02:10.557 Fetching value of define "__AVX__" : 1
00:02:10.557 Fetching value of define "__AVX2__" : 1
00:02:10.557 Fetching value of define "__AVX512BW__" : 1
00:02:10.557 Fetching value of define "__AVX512CD__" : 1
00:02:10.557 Fetching value of define "__AVX512DQ__" : 1
00:02:10.557 Fetching value of define "__AVX512F__" : 1
00:02:10.557 Fetching value of define "__AVX512VL__" : 1
00:02:10.557 Fetching value of define "__PCLMUL__" : 1
00:02:10.557 Fetching value of define "__RDRND__" : 1
00:02:10.557 Fetching value of define "__RDSEED__" : 1
00:02:10.557 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:10.557 Fetching value of define "__znver1__" : (undefined)
00:02:10.557 Fetching value of define "__znver2__" : (undefined)
00:02:10.557 Fetching value of define "__znver3__" : (undefined)
00:02:10.557 Fetching value of define "__znver4__" : (undefined)
00:02:10.557 Library asan found: YES
00:02:10.557 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:10.557 Message: lib/log: Defining dependency "log"
00:02:10.557 Message: lib/kvargs: Defining dependency "kvargs"
00:02:10.557 Message: lib/telemetry: Defining dependency "telemetry"
00:02:10.557 Library rt found: YES
00:02:10.557 Checking for function "getentropy" : NO
00:02:10.557 Message: lib/eal: Defining dependency "eal"
00:02:10.557 Message: lib/ring: Defining dependency "ring"
00:02:10.557 Message: lib/rcu: Defining dependency "rcu"
00:02:10.557 Message: lib/mempool: Defining dependency "mempool"
00:02:10.557 Message: lib/mbuf: Defining dependency "mbuf"
00:02:10.557 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:10.557 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:10.557 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:10.557 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:10.557 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:10.558 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:10.558 Compiler for C supports arguments -mpclmul: YES
00:02:10.558 Compiler for C supports arguments -maes: YES
00:02:10.558 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:10.558 Compiler for C supports arguments -mavx512bw: YES
00:02:10.558 Compiler for C supports arguments -mavx512dq: YES
00:02:10.558 Compiler for C supports arguments -mavx512vl: YES
00:02:10.558 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:10.558 Compiler for C supports arguments -mavx2: YES
00:02:10.558 Compiler for C supports arguments -mavx: YES
00:02:10.558 Message: lib/net: Defining dependency "net"
00:02:10.558 Message: lib/meter: Defining dependency "meter"
00:02:10.558 Message: lib/ethdev: Defining dependency "ethdev"
00:02:10.558 Message: lib/pci: Defining dependency "pci"
00:02:10.558 Message: lib/cmdline: Defining dependency "cmdline"
00:02:10.558 Message: lib/hash: Defining dependency "hash"
00:02:10.558 Message: lib/timer: Defining dependency "timer"
00:02:10.558 Message: lib/compressdev: Defining dependency "compressdev"
00:02:10.558 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:10.558 Message: lib/dmadev: Defining dependency "dmadev"
00:02:10.558 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:10.558 Message: lib/power: Defining dependency "power"
00:02:10.558 Message: lib/reorder: Defining dependency "reorder"
00:02:10.558 Message: lib/security: Defining dependency "security"
00:02:10.558 Has header "linux/userfaultfd.h" : YES
00:02:10.558 Has header "linux/vduse.h" : YES
00:02:10.558 Message: lib/vhost: Defining dependency "vhost"
00:02:10.558 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:10.558 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:10.558 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:10.558 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:10.558 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:10.558 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:10.558 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:10.558 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:10.558 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:10.558 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:10.558 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:10.558 Configuring doxy-api-html.conf using configuration
00:02:10.558 Configuring doxy-api-man.conf using configuration
00:02:10.558 Program mandb found: YES (/usr/bin/mandb)
00:02:10.558 Program sphinx-build found: NO
00:02:10.558 Configuring rte_build_config.h using configuration
00:02:10.558 Message:
00:02:10.558 =================
00:02:10.558 Applications Enabled
00:02:10.558 =================
00:02:10.558
00:02:10.558 apps:
00:02:10.558
00:02:10.558
00:02:10.558 Message:
00:02:10.558 =================
00:02:10.558 Libraries Enabled
00:02:10.558 =================
00:02:10.558
00:02:10.558 libs:
00:02:10.558 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:10.558 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:10.558 cryptodev, dmadev, power, reorder, security, vhost,
00:02:10.558
00:02:10.558 Message:
00:02:10.558 ===============
00:02:10.558 Drivers Enabled
00:02:10.558 ===============
00:02:10.558
00:02:10.558 common:
00:02:10.558
00:02:10.558 bus:
00:02:10.558 pci, vdev,
00:02:10.558 mempool:
00:02:10.558 ring,
00:02:10.558 dma:
00:02:10.558
00:02:10.558 net:
00:02:10.558
00:02:10.558 crypto:
00:02:10.558
00:02:10.558 compress:
00:02:10.558
00:02:10.558 vdpa:
00:02:10.558
00:02:10.558
00:02:10.558 Message:
00:02:10.558 =================
00:02:10.558 Content Skipped
00:02:10.558 =================
00:02:10.558
00:02:10.558 apps:
00:02:10.558 dumpcap: explicitly disabled via build config
00:02:10.558 graph: explicitly disabled via build config
00:02:10.558 pdump: explicitly disabled via build config
00:02:10.558 proc-info: explicitly disabled via build config
00:02:10.558 test-acl: explicitly disabled via build config
00:02:10.558 test-bbdev: explicitly disabled via build config
00:02:10.558 test-cmdline: explicitly disabled via build config
00:02:10.558 test-compress-perf: explicitly disabled via build config
00:02:10.558 test-crypto-perf: explicitly disabled via build config
00:02:10.558 test-dma-perf: explicitly disabled via build config
00:02:10.558 test-eventdev: explicitly disabled via build config
00:02:10.558 test-fib: explicitly disabled via build config
00:02:10.558 test-flow-perf: explicitly disabled via build config
00:02:10.558 test-gpudev: explicitly disabled via build config
00:02:10.558 test-mldev: explicitly disabled via build config
00:02:10.558 test-pipeline: explicitly disabled via build config
00:02:10.558 test-pmd: explicitly disabled via build config
00:02:10.558 test-regex: explicitly disabled via build config
00:02:10.558 test-sad: explicitly disabled via build config
00:02:10.558 test-security-perf: explicitly disabled via build config
00:02:10.558
00:02:10.558 libs:
00:02:10.558 argparse: explicitly disabled via build config
00:02:10.558 metrics: explicitly disabled via build config
00:02:10.558 acl: explicitly disabled via build config
00:02:10.558 bbdev: explicitly disabled via build config
00:02:10.558 bitratestats: explicitly disabled via build config
00:02:10.558 bpf: explicitly disabled via build config
00:02:10.558 cfgfile: explicitly disabled via build config
00:02:10.558 distributor: explicitly disabled via build config
00:02:10.558 efd: explicitly disabled via build config
00:02:10.558 eventdev: explicitly disabled via build config
00:02:10.558 dispatcher: explicitly disabled via build config
00:02:10.558 gpudev: explicitly disabled via build config
00:02:10.558 gro: explicitly disabled via build config
00:02:10.558 gso: explicitly disabled via build config
00:02:10.558 ip_frag: explicitly disabled via build config
00:02:10.558 jobstats: explicitly disabled via build config
00:02:10.558 latencystats: explicitly disabled via build config
00:02:10.558 lpm: explicitly disabled via build config
00:02:10.558 member: explicitly disabled via build config
00:02:10.558 pcapng: explicitly disabled via build config
00:02:10.558 rawdev: explicitly disabled via build config
disabled via build config 00:02:10.558 mldev: explicitly disabled via build config 00:02:10.558 rib: explicitly disabled via build config 00:02:10.558 sched: explicitly disabled via build config 00:02:10.558 stack: explicitly disabled via build config 00:02:10.558 ipsec: explicitly disabled via build config 00:02:10.558 pdcp: explicitly disabled via build config 00:02:10.558 fib: explicitly disabled via build config 00:02:10.558 port: explicitly disabled via build config 00:02:10.558 pdump: explicitly disabled via build config 00:02:10.558 table: explicitly disabled via build config 00:02:10.558 pipeline: explicitly disabled via build config 00:02:10.558 graph: explicitly disabled via build config 00:02:10.558 node: explicitly disabled via build config 00:02:10.559 00:02:10.559 drivers: 00:02:10.559 common/cpt: not in enabled drivers build config 00:02:10.559 common/dpaax: not in enabled drivers build config 00:02:10.559 common/iavf: not in enabled drivers build config 00:02:10.559 common/idpf: not in enabled drivers build config 00:02:10.559 common/ionic: not in enabled drivers build config 00:02:10.559 common/mvep: not in enabled drivers build config 00:02:10.559 common/octeontx: not in enabled drivers build config 00:02:10.559 bus/auxiliary: not in enabled drivers build config 00:02:10.559 bus/cdx: not in enabled drivers build config 00:02:10.559 bus/dpaa: not in enabled drivers build config 00:02:10.559 bus/fslmc: not in enabled drivers build config 00:02:10.559 bus/ifpga: not in enabled drivers build config 00:02:10.559 bus/platform: not in enabled drivers build config 00:02:10.559 bus/uacce: not in enabled drivers build config 00:02:10.559 bus/vmbus: not in enabled drivers build config 00:02:10.559 common/cnxk: not in enabled drivers build config 00:02:10.559 common/mlx5: not in enabled drivers build config 00:02:10.559 common/nfp: not in enabled drivers build config 00:02:10.559 common/nitrox: not in enabled drivers build config 00:02:10.559 common/qat: not in enabled drivers build config 00:02:10.559 common/sfc_efx: not in enabled drivers build config 00:02:10.559 mempool/bucket: not in enabled drivers build config 00:02:10.559 mempool/cnxk: not in enabled drivers build config 00:02:10.559 mempool/dpaa: not in enabled drivers build config 00:02:10.559 mempool/dpaa2: not in enabled drivers build config 00:02:10.559 mempool/octeontx: not in enabled drivers build config 00:02:10.559 mempool/stack: not in enabled drivers build config 00:02:10.559 dma/cnxk: not in enabled drivers build config 00:02:10.559 dma/dpaa: not in enabled drivers build config 00:02:10.559 dma/dpaa2: not in enabled drivers build config 00:02:10.559 dma/hisilicon: not in enabled drivers build config 00:02:10.559 dma/idxd: not in enabled drivers build config 00:02:10.559 dma/ioat: not in enabled drivers build config 00:02:10.559 dma/skeleton: not in enabled drivers build config 00:02:10.559 net/af_packet: not in enabled drivers build config 00:02:10.559 net/af_xdp: not in enabled drivers build config 00:02:10.559 net/ark: not in enabled drivers build config 00:02:10.559 net/atlantic: not in enabled drivers build config 00:02:10.559 net/avp: not in enabled drivers build config 00:02:10.559 net/axgbe: not in enabled drivers build config 00:02:10.559 net/bnx2x: not in enabled drivers build config 00:02:10.559 net/bnxt: not in enabled drivers build config 00:02:10.559 net/bonding: not in enabled drivers build config 00:02:10.559 net/cnxk: not in enabled drivers build config 00:02:10.559 net/cpfl: not in enabled drivers 
build config 00:02:10.559 net/cxgbe: not in enabled drivers build config 00:02:10.559 net/dpaa: not in enabled drivers build config 00:02:10.559 net/dpaa2: not in enabled drivers build config 00:02:10.559 net/e1000: not in enabled drivers build config 00:02:10.559 net/ena: not in enabled drivers build config 00:02:10.559 net/enetc: not in enabled drivers build config 00:02:10.559 net/enetfec: not in enabled drivers build config 00:02:10.559 net/enic: not in enabled drivers build config 00:02:10.559 net/failsafe: not in enabled drivers build config 00:02:10.559 net/fm10k: not in enabled drivers build config 00:02:10.559 net/gve: not in enabled drivers build config 00:02:10.559 net/hinic: not in enabled drivers build config 00:02:10.559 net/hns3: not in enabled drivers build config 00:02:10.559 net/i40e: not in enabled drivers build config 00:02:10.559 net/iavf: not in enabled drivers build config 00:02:10.559 net/ice: not in enabled drivers build config 00:02:10.559 net/idpf: not in enabled drivers build config 00:02:10.559 net/igc: not in enabled drivers build config 00:02:10.559 net/ionic: not in enabled drivers build config 00:02:10.559 net/ipn3ke: not in enabled drivers build config 00:02:10.559 net/ixgbe: not in enabled drivers build config 00:02:10.559 net/mana: not in enabled drivers build config 00:02:10.559 net/memif: not in enabled drivers build config 00:02:10.559 net/mlx4: not in enabled drivers build config 00:02:10.559 net/mlx5: not in enabled drivers build config 00:02:10.559 net/mvneta: not in enabled drivers build config 00:02:10.559 net/mvpp2: not in enabled drivers build config 00:02:10.559 net/netvsc: not in enabled drivers build config 00:02:10.559 net/nfb: not in enabled drivers build config 00:02:10.559 net/nfp: not in enabled drivers build config 00:02:10.559 net/ngbe: not in enabled drivers build config 00:02:10.559 net/null: not in enabled drivers build config 00:02:10.559 net/octeontx: not in enabled drivers build config 00:02:10.559 net/octeon_ep: not in enabled drivers build config 00:02:10.559 net/pcap: not in enabled drivers build config 00:02:10.559 net/pfe: not in enabled drivers build config 00:02:10.559 net/qede: not in enabled drivers build config 00:02:10.559 net/ring: not in enabled drivers build config 00:02:10.559 net/sfc: not in enabled drivers build config 00:02:10.559 net/softnic: not in enabled drivers build config 00:02:10.559 net/tap: not in enabled drivers build config 00:02:10.559 net/thunderx: not in enabled drivers build config 00:02:10.559 net/txgbe: not in enabled drivers build config 00:02:10.559 net/vdev_netvsc: not in enabled drivers build config 00:02:10.559 net/vhost: not in enabled drivers build config 00:02:10.559 net/virtio: not in enabled drivers build config 00:02:10.559 net/vmxnet3: not in enabled drivers build config 00:02:10.559 raw/*: missing internal dependency, "rawdev" 00:02:10.559 crypto/armv8: not in enabled drivers build config 00:02:10.559 crypto/bcmfs: not in enabled drivers build config 00:02:10.559 crypto/caam_jr: not in enabled drivers build config 00:02:10.559 crypto/ccp: not in enabled drivers build config 00:02:10.559 crypto/cnxk: not in enabled drivers build config 00:02:10.559 crypto/dpaa_sec: not in enabled drivers build config 00:02:10.559 crypto/dpaa2_sec: not in enabled drivers build config 00:02:10.559 crypto/ipsec_mb: not in enabled drivers build config 00:02:10.559 crypto/mlx5: not in enabled drivers build config 00:02:10.559 crypto/mvsam: not in enabled drivers build config 00:02:10.559 crypto/nitrox: 
not in enabled drivers build config 00:02:10.559 crypto/null: not in enabled drivers build config 00:02:10.559 crypto/octeontx: not in enabled drivers build config 00:02:10.559 crypto/openssl: not in enabled drivers build config 00:02:10.559 crypto/scheduler: not in enabled drivers build config 00:02:10.559 crypto/uadk: not in enabled drivers build config 00:02:10.559 crypto/virtio: not in enabled drivers build config 00:02:10.559 compress/isal: not in enabled drivers build config 00:02:10.559 compress/mlx5: not in enabled drivers build config 00:02:10.559 compress/nitrox: not in enabled drivers build config 00:02:10.559 compress/octeontx: not in enabled drivers build config 00:02:10.559 compress/zlib: not in enabled drivers build config 00:02:10.559 regex/*: missing internal dependency, "regexdev" 00:02:10.559 ml/*: missing internal dependency, "mldev" 00:02:10.559 vdpa/ifc: not in enabled drivers build config 00:02:10.559 vdpa/mlx5: not in enabled drivers build config 00:02:10.559 vdpa/nfp: not in enabled drivers build config 00:02:10.559 vdpa/sfc: not in enabled drivers build config 00:02:10.559 event/*: missing internal dependency, "eventdev" 00:02:10.559 baseband/*: missing internal dependency, "bbdev" 00:02:10.559 gpu/*: missing internal dependency, "gpudev" 00:02:10.559 00:02:10.559 00:02:10.559 Build targets in project: 84 00:02:10.559 00:02:10.559 DPDK 24.03.0 00:02:10.559 00:02:10.559 User defined options 00:02:10.559 buildtype : debug 00:02:10.559 default_library : shared 00:02:10.559 libdir : lib 00:02:10.559 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:10.559 b_sanitize : address 00:02:10.559 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:10.559 c_link_args : 00:02:10.559 cpu_instruction_set: native 00:02:10.559 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:10.559 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:10.559 enable_docs : false 00:02:10.559 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:10.559 enable_kmods : false 00:02:10.559 max_lcores : 128 00:02:10.559 tests : false 00:02:10.559 00:02:10.559 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:11.125 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:11.125 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:11.125 [2/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:11.125 [3/267] Linking static target lib/librte_log.a 00:02:11.125 [4/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:11.125 [5/267] Linking static target lib/librte_kvargs.a 00:02:11.125 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:11.691 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:11.691 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:11.691 [9/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:11.691 [10/267] 
Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.691 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:11.691 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:11.691 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:11.691 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:11.691 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:11.691 [16/267] Linking static target lib/librte_telemetry.a 00:02:11.691 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:11.949 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:11.949 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:11.949 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:11.949 [21/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.949 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:11.949 [23/267] Linking target lib/librte_log.so.24.1 00:02:12.207 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:12.207 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:12.207 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:12.207 [27/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:12.207 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:12.207 [29/267] Linking target lib/librte_kvargs.so.24.1 00:02:12.207 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:12.207 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:12.207 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:12.465 [33/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.465 [34/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:12.465 [35/267] Linking target lib/librte_telemetry.so.24.1 00:02:12.465 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:12.465 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:12.723 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:12.723 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:12.723 [40/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:12.723 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:12.723 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:12.723 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:12.723 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:12.723 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:12.723 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:12.981 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:12.981 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 
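
The eal_common_* objects in the build steps above belong to DPDK's Environment Abstraction Layer (librte_eal), the library every DPDK consumer initializes first. Purely for orientation, and not part of the build being logged, a minimal EAL consumer looks roughly like this, using the long-standing public entry points rte_eal_init() and rte_eal_cleanup():

/* Illustrative sketch, NOT part of this build log: a minimal program
 * linked against the librte_eal / librte_log targets compiled above. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>

int main(int argc, char **argv)
{
    /* rte_eal_init() consumes EAL arguments (lcores, memory, PCI lists). */
    if (rte_eal_init(argc, argv) < 0) {
        fprintf(stderr, "rte_eal_init() failed\n");
        return 1;
    }
    printf("EAL ready on %u lcore(s)\n", rte_lcore_count());
    rte_eal_cleanup();
    return 0;
}
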
00:02:12.981 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:12.981 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:12.981 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:12.981 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:13.238 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:13.238 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:13.239 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:13.239 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:13.239 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:13.239 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:13.239 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:13.497 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:13.497 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:13.497 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:13.497 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:13.497 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:13.497 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:13.497 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:13.754 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:13.754 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:13.755 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:13.755 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:13.755 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:13.755 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:14.011 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:14.011 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:14.011 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:14.011 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:14.011 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:14.011 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:14.268 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:14.268 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:14.268 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:14.268 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:14.526 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:14.526 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:14.526 [85/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:14.526 [86/267] Linking static target lib/librte_ring.a 00:02:14.526 [87/267] Linking static target lib/librte_eal.a 00:02:14.526 [88/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:14.526 [89/267] Linking static target lib/librte_rcu.a 00:02:14.784 [90/267] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:14.784 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:14.784 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:14.784 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:14.784 [94/267] Linking static target lib/librte_mempool.a 00:02:14.784 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:14.784 [96/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.784 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:15.041 [98/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.041 [99/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:15.041 [100/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:15.041 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:15.041 [102/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:15.299 [103/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:15.299 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:15.299 [105/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:15.299 [106/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:15.299 [107/267] Linking static target lib/librte_meter.a 00:02:15.299 [108/267] Linking static target lib/librte_mbuf.a 00:02:15.299 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:15.299 [110/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:15.299 [111/267] Linking static target lib/librte_net.a 00:02:15.556 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:15.556 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:15.556 [114/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.556 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:15.815 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.815 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:15.815 [118/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.073 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:16.332 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:16.332 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:16.332 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:16.332 [123/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.332 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:16.332 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:16.332 [126/267] Linking static target lib/librte_pci.a 00:02:16.590 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:16.590 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:16.590 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:16.590 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:16.590 [131/267] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:16.590 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:16.590 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:16.590 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:16.847 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:16.847 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:16.847 [137/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.847 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:16.847 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:16.847 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:16.847 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:16.847 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:16.847 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:16.847 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:16.847 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:16.847 [146/267] Linking static target lib/librte_cmdline.a 00:02:17.104 [147/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:17.104 [148/267] Linking static target lib/librte_timer.a 00:02:17.104 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:17.104 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:17.362 [151/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:17.362 [152/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:17.362 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:17.362 [154/267] Linking static target lib/librte_ethdev.a 00:02:17.362 [155/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:17.620 [156/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:17.620 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:17.620 [158/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:17.620 [159/267] Linking static target lib/librte_hash.a 00:02:17.620 [160/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.620 [161/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:17.620 [162/267] Linking static target lib/librte_compressdev.a 00:02:17.620 [163/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:17.620 [164/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:17.877 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:17.878 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:17.878 [167/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:18.136 [168/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:18.136 [169/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:18.136 [170/267] Linking static target lib/librte_dmadev.a 00:02:18.136 [171/267] 
Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:18.136 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.394 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:18.394 [174/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:18.394 [175/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.394 [176/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:18.394 [177/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:18.394 [178/267] Linking static target lib/librte_cryptodev.a 00:02:18.394 [179/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:18.652 [180/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.652 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:18.652 [182/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:18.652 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:18.652 [184/267] Linking static target lib/librte_power.a 00:02:18.652 [185/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.909 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:18.909 [187/267] Linking static target lib/librte_reorder.a 00:02:18.909 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:18.909 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:18.909 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:18.909 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:18.909 [192/267] Linking static target lib/librte_security.a 00:02:19.167 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.424 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:19.682 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.682 [196/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.682 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:19.682 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:19.682 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:19.941 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:19.941 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:19.941 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:19.941 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:19.941 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:20.198 [205/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:20.198 [206/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:20.198 [207/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:20.198 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:20.198 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:20.456 [210/267] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.456 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:20.456 [212/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:20.456 [213/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:20.456 [214/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:20.456 [215/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:20.456 [216/267] Linking static target drivers/librte_bus_vdev.a 00:02:20.456 [217/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:20.456 [218/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:20.456 [219/267] Linking static target drivers/librte_bus_pci.a 00:02:20.456 [220/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:20.713 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:20.714 [222/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:20.714 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:20.714 [224/267] Linking static target drivers/librte_mempool_ring.a 00:02:20.714 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.972 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.229 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:22.602 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.602 [229/267] Linking target lib/librte_eal.so.24.1 00:02:22.602 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:22.602 [231/267] Linking target lib/librte_ring.so.24.1 00:02:22.602 [232/267] Linking target lib/librte_timer.so.24.1 00:02:22.602 [233/267] Linking target lib/librte_pci.so.24.1 00:02:22.602 [234/267] Linking target lib/librte_dmadev.so.24.1 00:02:22.602 [235/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:22.602 [236/267] Linking target lib/librte_meter.so.24.1 00:02:22.602 [237/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:22.602 [238/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:22.602 [239/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:22.602 [240/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:22.602 [241/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:22.602 [242/267] Linking target lib/librte_mempool.so.24.1 00:02:22.602 [243/267] Linking target lib/librte_rcu.so.24.1 00:02:22.860 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:22.860 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:22.860 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:22.860 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:22.860 [248/267] Linking target lib/librte_mbuf.so.24.1 00:02:23.118 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 
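
From [250/267] onward the build links the core DPDK runtime libraries this SPDK build depends on (eal, ring, rcu, mempool, mbuf, net). As a hedged illustration of what one of them, librte_ring, provides (a sketch, not code from this build; assumes rte_eal_init() has already succeeded):

/* Illustrative sketch, NOT part of this build log: basic use of the
 * librte_ring library linked in the steps above. */
#include <rte_ring.h>
#include <rte_lcore.h>

static int ring_smoke_test(void)
{
    /* 1024-slot single-producer/single-consumer ring on the local socket. */
    struct rte_ring *r = rte_ring_create("demo_ring", 1024, rte_socket_id(),
                                         RING_F_SP_ENQ | RING_F_SC_DEQ);
    if (r == NULL)
        return -1;

    static int payload = 42;
    void *obj = NULL;

    /* The ring stores pointers; the caller owns the pointed-to memory. */
    if (rte_ring_enqueue(r, &payload) != 0)
        return -1;
    if (rte_ring_dequeue(r, &obj) != 0 || obj != &payload)
        return -1;

    rte_ring_free(r);
    return 0;
}
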
00:02:23.118 [250/267] Linking target lib/librte_net.so.24.1 00:02:23.118 [251/267] Linking target lib/librte_cryptodev.so.24.1 00:02:23.118 [252/267] Linking target lib/librte_compressdev.so.24.1 00:02:23.118 [253/267] Linking target lib/librte_reorder.so.24.1 00:02:23.118 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:23.118 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:23.118 [256/267] Linking target lib/librte_security.so.24.1 00:02:23.118 [257/267] Linking target lib/librte_cmdline.so.24.1 00:02:23.118 [258/267] Linking target lib/librte_hash.so.24.1 00:02:23.375 [259/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.375 [260/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:23.375 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:23.375 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:23.375 [263/267] Linking target lib/librte_power.so.24.1 00:02:23.634 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:23.893 [265/267] Linking static target lib/librte_vhost.a 00:02:25.270 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.270 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:25.270 INFO: autodetecting backend as ninja 00:02:25.270 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:43.386 CC lib/ut_mock/mock.o 00:02:43.386 CC lib/ut/ut.o 00:02:43.386 CC lib/log/log_flags.o 00:02:43.386 CC lib/log/log.o 00:02:43.386 CC lib/log/log_deprecated.o 00:02:43.386 LIB libspdk_ut_mock.a 00:02:43.386 LIB libspdk_ut.a 00:02:43.386 LIB libspdk_log.a 00:02:43.386 SO libspdk_ut_mock.so.6.0 00:02:43.386 SO libspdk_ut.so.2.0 00:02:43.386 SO libspdk_log.so.7.1 00:02:43.386 SYMLINK libspdk_ut_mock.so 00:02:43.386 SYMLINK libspdk_ut.so 00:02:43.386 SYMLINK libspdk_log.so 00:02:43.386 CC lib/ioat/ioat.o 00:02:43.386 CC lib/dma/dma.o 00:02:43.386 CC lib/util/base64.o 00:02:43.386 CC lib/util/crc32.o 00:02:43.386 CC lib/util/cpuset.o 00:02:43.386 CC lib/util/bit_array.o 00:02:43.386 CC lib/util/crc16.o 00:02:43.386 CC lib/util/crc32c.o 00:02:43.386 CXX lib/trace_parser/trace.o 00:02:43.386 CC lib/vfio_user/host/vfio_user_pci.o 00:02:43.386 CC lib/util/crc32_ieee.o 00:02:43.386 CC lib/util/crc64.o 00:02:43.386 CC lib/util/dif.o 00:02:43.386 CC lib/util/fd.o 00:02:43.386 LIB libspdk_dma.a 00:02:43.386 CC lib/util/fd_group.o 00:02:43.386 SO libspdk_dma.so.5.0 00:02:43.386 CC lib/util/file.o 00:02:43.386 CC lib/util/hexlify.o 00:02:43.386 CC lib/util/iov.o 00:02:43.386 LIB libspdk_ioat.a 00:02:43.386 SYMLINK libspdk_dma.so 00:02:43.386 CC lib/util/math.o 00:02:43.386 CC lib/util/net.o 00:02:43.386 SO libspdk_ioat.so.7.0 00:02:43.386 SYMLINK libspdk_ioat.so 00:02:43.386 CC lib/util/pipe.o 00:02:43.386 CC lib/vfio_user/host/vfio_user.o 00:02:43.386 CC lib/util/strerror_tls.o 00:02:43.386 CC lib/util/string.o 00:02:43.386 CC lib/util/uuid.o 00:02:43.386 CC lib/util/xor.o 00:02:43.386 CC lib/util/zipf.o 00:02:43.386 CC lib/util/md5.o 00:02:43.386 LIB libspdk_vfio_user.a 00:02:43.386 SO libspdk_vfio_user.so.5.0 00:02:43.386 SYMLINK libspdk_vfio_user.so 00:02:43.386 LIB libspdk_util.a 00:02:43.386 SO libspdk_util.so.10.1 00:02:43.386 LIB libspdk_trace_parser.a 00:02:43.386 SO libspdk_trace_parser.so.6.0 00:02:43.386 
SYMLINK libspdk_util.so 00:02:43.386 SYMLINK libspdk_trace_parser.so 00:02:43.386 CC lib/vmd/vmd.o 00:02:43.386 CC lib/vmd/led.o 00:02:43.386 CC lib/json/json_parse.o 00:02:43.386 CC lib/json/json_util.o 00:02:43.386 CC lib/json/json_write.o 00:02:43.386 CC lib/rdma_utils/rdma_utils.o 00:02:43.386 CC lib/conf/conf.o 00:02:43.386 CC lib/env_dpdk/env.o 00:02:43.386 CC lib/env_dpdk/memory.o 00:02:43.386 CC lib/idxd/idxd.o 00:02:43.386 CC lib/idxd/idxd_user.o 00:02:43.386 LIB libspdk_conf.a 00:02:43.386 CC lib/idxd/idxd_kernel.o 00:02:43.386 SO libspdk_conf.so.6.0 00:02:43.386 LIB libspdk_rdma_utils.a 00:02:43.386 SO libspdk_rdma_utils.so.1.0 00:02:43.386 SYMLINK libspdk_conf.so 00:02:43.386 CC lib/env_dpdk/pci.o 00:02:43.386 CC lib/env_dpdk/init.o 00:02:43.386 CC lib/env_dpdk/threads.o 00:02:43.386 SYMLINK libspdk_rdma_utils.so 00:02:43.386 CC lib/env_dpdk/pci_ioat.o 00:02:43.386 LIB libspdk_json.a 00:02:43.386 CC lib/env_dpdk/pci_virtio.o 00:02:43.386 SO libspdk_json.so.6.0 00:02:43.386 SYMLINK libspdk_json.so 00:02:43.386 CC lib/env_dpdk/pci_vmd.o 00:02:43.386 CC lib/env_dpdk/pci_idxd.o 00:02:43.386 CC lib/env_dpdk/pci_event.o 00:02:43.386 CC lib/env_dpdk/sigbus_handler.o 00:02:43.386 CC lib/env_dpdk/pci_dpdk.o 00:02:43.386 CC lib/rdma_provider/common.o 00:02:43.386 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:43.386 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:43.386 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:43.386 LIB libspdk_idxd.a 00:02:43.386 LIB libspdk_vmd.a 00:02:43.386 SO libspdk_idxd.so.12.1 00:02:43.386 SO libspdk_vmd.so.6.0 00:02:43.386 SYMLINK libspdk_idxd.so 00:02:43.386 SYMLINK libspdk_vmd.so 00:02:43.386 CC lib/jsonrpc/jsonrpc_client.o 00:02:43.386 CC lib/jsonrpc/jsonrpc_server.o 00:02:43.386 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:43.386 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:43.386 LIB libspdk_rdma_provider.a 00:02:43.386 SO libspdk_rdma_provider.so.7.0 00:02:43.386 SYMLINK libspdk_rdma_provider.so 00:02:43.386 LIB libspdk_jsonrpc.a 00:02:43.386 SO libspdk_jsonrpc.so.6.0 00:02:43.386 SYMLINK libspdk_jsonrpc.so 00:02:43.644 CC lib/rpc/rpc.o 00:02:43.644 LIB libspdk_env_dpdk.a 00:02:43.644 SO libspdk_env_dpdk.so.15.1 00:02:43.644 LIB libspdk_rpc.a 00:02:43.907 SO libspdk_rpc.so.6.0 00:02:43.907 SYMLINK libspdk_env_dpdk.so 00:02:43.907 SYMLINK libspdk_rpc.so 00:02:43.907 CC lib/keyring/keyring.o 00:02:43.907 CC lib/keyring/keyring_rpc.o 00:02:44.166 CC lib/notify/notify.o 00:02:44.166 CC lib/notify/notify_rpc.o 00:02:44.166 CC lib/trace/trace_flags.o 00:02:44.166 CC lib/trace/trace.o 00:02:44.166 CC lib/trace/trace_rpc.o 00:02:44.166 LIB libspdk_notify.a 00:02:44.166 SO libspdk_notify.so.6.0 00:02:44.166 SYMLINK libspdk_notify.so 00:02:44.166 LIB libspdk_keyring.a 00:02:44.166 LIB libspdk_trace.a 00:02:44.425 SO libspdk_keyring.so.2.0 00:02:44.425 SO libspdk_trace.so.11.0 00:02:44.425 SYMLINK libspdk_keyring.so 00:02:44.425 SYMLINK libspdk_trace.so 00:02:44.684 CC lib/thread/iobuf.o 00:02:44.684 CC lib/thread/thread.o 00:02:44.684 CC lib/sock/sock_rpc.o 00:02:44.684 CC lib/sock/sock.o 00:02:44.943 LIB libspdk_sock.a 00:02:44.943 SO libspdk_sock.so.10.0 00:02:45.202 SYMLINK libspdk_sock.so 00:02:45.460 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:45.460 CC lib/nvme/nvme_ctrlr.o 00:02:45.460 CC lib/nvme/nvme_ns.o 00:02:45.460 CC lib/nvme/nvme_pcie.o 00:02:45.460 CC lib/nvme/nvme_fabric.o 00:02:45.460 CC lib/nvme/nvme_qpair.o 00:02:45.460 CC lib/nvme/nvme_ns_cmd.o 00:02:45.460 CC lib/nvme/nvme.o 00:02:45.460 CC lib/nvme/nvme_pcie_common.o 00:02:46.027 CC lib/nvme/nvme_quirks.o 00:02:46.027 
CC lib/nvme/nvme_transport.o 00:02:46.027 CC lib/nvme/nvme_discovery.o 00:02:46.027 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:46.027 LIB libspdk_thread.a 00:02:46.286 SO libspdk_thread.so.11.0 00:02:46.286 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:46.286 CC lib/nvme/nvme_tcp.o 00:02:46.286 CC lib/nvme/nvme_opal.o 00:02:46.286 SYMLINK libspdk_thread.so 00:02:46.286 CC lib/nvme/nvme_io_msg.o 00:02:46.286 CC lib/nvme/nvme_poll_group.o 00:02:46.546 CC lib/nvme/nvme_zns.o 00:02:46.546 CC lib/nvme/nvme_stubs.o 00:02:46.546 CC lib/nvme/nvme_auth.o 00:02:46.807 CC lib/nvme/nvme_cuse.o 00:02:46.807 CC lib/nvme/nvme_rdma.o 00:02:46.807 CC lib/accel/accel.o 00:02:47.065 CC lib/accel/accel_rpc.o 00:02:47.065 CC lib/blob/blobstore.o 00:02:47.065 CC lib/init/json_config.o 00:02:47.065 CC lib/virtio/virtio.o 00:02:47.327 CC lib/virtio/virtio_vhost_user.o 00:02:47.327 CC lib/init/subsystem.o 00:02:47.327 CC lib/blob/request.o 00:02:47.586 CC lib/virtio/virtio_vfio_user.o 00:02:47.586 CC lib/blob/zeroes.o 00:02:47.586 CC lib/init/subsystem_rpc.o 00:02:47.586 CC lib/blob/blob_bs_dev.o 00:02:47.586 CC lib/init/rpc.o 00:02:47.586 CC lib/virtio/virtio_pci.o 00:02:47.586 CC lib/accel/accel_sw.o 00:02:47.846 CC lib/fsdev/fsdev.o 00:02:47.846 CC lib/fsdev/fsdev_io.o 00:02:47.846 CC lib/fsdev/fsdev_rpc.o 00:02:47.846 LIB libspdk_init.a 00:02:47.846 SO libspdk_init.so.6.0 00:02:47.846 SYMLINK libspdk_init.so 00:02:48.109 LIB libspdk_virtio.a 00:02:48.109 SO libspdk_virtio.so.7.0 00:02:48.109 CC lib/event/app.o 00:02:48.109 CC lib/event/log_rpc.o 00:02:48.109 CC lib/event/reactor.o 00:02:48.109 CC lib/event/app_rpc.o 00:02:48.109 LIB libspdk_accel.a 00:02:48.109 SO libspdk_accel.so.16.0 00:02:48.109 SYMLINK libspdk_virtio.so 00:02:48.109 CC lib/event/scheduler_static.o 00:02:48.109 SYMLINK libspdk_accel.so 00:02:48.109 LIB libspdk_nvme.a 00:02:48.368 CC lib/bdev/bdev.o 00:02:48.368 CC lib/bdev/bdev_rpc.o 00:02:48.368 CC lib/bdev/bdev_zone.o 00:02:48.368 CC lib/bdev/part.o 00:02:48.368 CC lib/bdev/scsi_nvme.o 00:02:48.368 SO libspdk_nvme.so.15.0 00:02:48.368 LIB libspdk_fsdev.a 00:02:48.368 SO libspdk_fsdev.so.2.0 00:02:48.626 LIB libspdk_event.a 00:02:48.626 SYMLINK libspdk_fsdev.so 00:02:48.626 SO libspdk_event.so.14.0 00:02:48.626 SYMLINK libspdk_nvme.so 00:02:48.626 SYMLINK libspdk_event.so 00:02:48.626 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:49.557 LIB libspdk_fuse_dispatcher.a 00:02:49.557 SO libspdk_fuse_dispatcher.so.1.0 00:02:49.557 SYMLINK libspdk_fuse_dispatcher.so 00:02:50.490 LIB libspdk_bdev.a 00:02:50.490 LIB libspdk_blob.a 00:02:50.490 SO libspdk_bdev.so.17.0 00:02:50.490 SO libspdk_blob.so.11.0 00:02:50.747 SYMLINK libspdk_blob.so 00:02:50.747 SYMLINK libspdk_bdev.so 00:02:50.747 CC lib/ublk/ublk.o 00:02:50.747 CC lib/ublk/ublk_rpc.o 00:02:50.747 CC lib/nvmf/ctrlr.o 00:02:50.747 CC lib/nvmf/ctrlr_discovery.o 00:02:50.747 CC lib/nvmf/ctrlr_bdev.o 00:02:50.747 CC lib/blobfs/blobfs.o 00:02:50.747 CC lib/lvol/lvol.o 00:02:50.747 CC lib/nbd/nbd.o 00:02:50.747 CC lib/scsi/dev.o 00:02:50.747 CC lib/ftl/ftl_core.o 00:02:51.005 CC lib/nvmf/subsystem.o 00:02:51.005 CC lib/scsi/lun.o 00:02:51.264 CC lib/ftl/ftl_init.o 00:02:51.264 CC lib/nbd/nbd_rpc.o 00:02:51.264 LIB libspdk_ublk.a 00:02:51.264 CC lib/ftl/ftl_layout.o 00:02:51.264 CC lib/nvmf/nvmf.o 00:02:51.264 SO libspdk_ublk.so.3.0 00:02:51.264 CC lib/scsi/port.o 00:02:51.522 SYMLINK libspdk_ublk.so 00:02:51.522 CC lib/nvmf/nvmf_rpc.o 00:02:51.522 LIB libspdk_nbd.a 00:02:51.522 SO libspdk_nbd.so.7.0 00:02:51.522 CC lib/blobfs/tree.o 00:02:51.522 SYMLINK 
libspdk_nbd.so 00:02:51.522 CC lib/nvmf/transport.o 00:02:51.522 CC lib/scsi/scsi.o 00:02:51.522 CC lib/nvmf/tcp.o 00:02:51.522 LIB libspdk_blobfs.a 00:02:51.522 SO libspdk_blobfs.so.10.0 00:02:51.780 SYMLINK libspdk_blobfs.so 00:02:51.780 CC lib/nvmf/stubs.o 00:02:51.780 CC lib/ftl/ftl_debug.o 00:02:51.780 CC lib/scsi/scsi_bdev.o 00:02:51.780 LIB libspdk_lvol.a 00:02:51.780 SO libspdk_lvol.so.10.0 00:02:52.037 SYMLINK libspdk_lvol.so 00:02:52.037 CC lib/nvmf/mdns_server.o 00:02:52.037 CC lib/ftl/ftl_io.o 00:02:52.037 CC lib/nvmf/rdma.o 00:02:52.037 CC lib/nvmf/auth.o 00:02:52.037 CC lib/ftl/ftl_sb.o 00:02:52.323 CC lib/ftl/ftl_l2p.o 00:02:52.323 CC lib/scsi/scsi_pr.o 00:02:52.323 CC lib/scsi/scsi_rpc.o 00:02:52.323 CC lib/ftl/ftl_l2p_flat.o 00:02:52.323 CC lib/scsi/task.o 00:02:52.323 CC lib/ftl/ftl_nv_cache.o 00:02:52.323 CC lib/ftl/ftl_band.o 00:02:52.323 CC lib/ftl/ftl_band_ops.o 00:02:52.323 CC lib/ftl/ftl_writer.o 00:02:52.581 CC lib/ftl/ftl_rq.o 00:02:52.581 LIB libspdk_scsi.a 00:02:52.581 CC lib/ftl/ftl_reloc.o 00:02:52.581 SO libspdk_scsi.so.9.0 00:02:52.581 CC lib/ftl/ftl_l2p_cache.o 00:02:52.839 CC lib/ftl/ftl_p2l.o 00:02:52.839 SYMLINK libspdk_scsi.so 00:02:52.839 CC lib/ftl/ftl_p2l_log.o 00:02:52.839 CC lib/ftl/mngt/ftl_mngt.o 00:02:52.839 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:52.839 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:53.097 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:53.097 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:53.097 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:53.097 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:53.097 CC lib/iscsi/conn.o 00:02:53.097 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:53.097 CC lib/vhost/vhost.o 00:02:53.097 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:53.355 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:53.355 CC lib/iscsi/init_grp.o 00:02:53.355 CC lib/iscsi/iscsi.o 00:02:53.355 CC lib/iscsi/param.o 00:02:53.355 CC lib/iscsi/portal_grp.o 00:02:53.355 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:53.355 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:53.613 CC lib/iscsi/tgt_node.o 00:02:53.614 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:53.614 CC lib/iscsi/iscsi_subsystem.o 00:02:53.614 CC lib/iscsi/iscsi_rpc.o 00:02:53.614 CC lib/iscsi/task.o 00:02:53.614 LIB libspdk_nvmf.a 00:02:53.614 CC lib/ftl/utils/ftl_conf.o 00:02:53.614 CC lib/ftl/utils/ftl_md.o 00:02:53.871 SO libspdk_nvmf.so.20.0 00:02:53.871 CC lib/vhost/vhost_rpc.o 00:02:53.871 CC lib/vhost/vhost_scsi.o 00:02:53.871 CC lib/vhost/vhost_blk.o 00:02:53.871 CC lib/vhost/rte_vhost_user.o 00:02:53.871 SYMLINK libspdk_nvmf.so 00:02:53.871 CC lib/ftl/utils/ftl_mempool.o 00:02:53.871 CC lib/ftl/utils/ftl_bitmap.o 00:02:53.871 CC lib/ftl/utils/ftl_property.o 00:02:54.127 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:54.127 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:54.127 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:54.127 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:54.127 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:54.127 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:54.127 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:54.127 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:54.127 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:54.127 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:54.384 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:54.384 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:54.384 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:54.384 CC lib/ftl/base/ftl_base_dev.o 00:02:54.384 CC lib/ftl/base/ftl_base_bdev.o 00:02:54.384 CC lib/ftl/ftl_trace.o 00:02:54.642 LIB libspdk_iscsi.a 00:02:54.642 SO libspdk_iscsi.so.8.0 00:02:54.642 LIB libspdk_ftl.a 00:02:54.642 LIB libspdk_vhost.a 
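
The lib/event objects compiled above (reactor.o, app.o, app_rpc.o, scheduler_static.o) form SPDK's application framework, on which the target applications linked later in this log sit. A minimal sketch of its use, assuming a recent SPDK where spdk_app_opts_init() takes the opts size as a second argument (this program is not part of the build):

/* Illustrative sketch, NOT part of this build log: starting the SPDK
 * application framework from the lib/event objects built above. */
#include "spdk/event.h"
#include "spdk/log.h"

static void app_started(void *ctx)
{
    SPDK_NOTICELOG("SPDK app framework is running\n");
    spdk_app_stop(0);   /* exit immediately in this toy example */
}

int main(int argc, char **argv)
{
    struct spdk_app_opts opts;

    spdk_app_opts_init(&opts, sizeof(opts));
    opts.name = "hello_event";   /* hypothetical app name */

    /* Blocks until spdk_app_stop() is called from app_started(). */
    int rc = spdk_app_start(&opts, app_started, NULL);
    spdk_app_fini();
    return rc;
}
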
00:02:54.642 SYMLINK libspdk_iscsi.so 00:02:54.642 SO libspdk_vhost.so.8.0 00:02:54.901 SO libspdk_ftl.so.9.0 00:02:54.901 SYMLINK libspdk_vhost.so 00:02:54.901 SYMLINK libspdk_ftl.so 00:02:55.159 CC module/env_dpdk/env_dpdk_rpc.o 00:02:55.417 CC module/accel/ioat/accel_ioat.o 00:02:55.417 CC module/blob/bdev/blob_bdev.o 00:02:55.417 CC module/accel/iaa/accel_iaa.o 00:02:55.417 CC module/accel/dsa/accel_dsa.o 00:02:55.417 CC module/fsdev/aio/fsdev_aio.o 00:02:55.417 CC module/keyring/file/keyring.o 00:02:55.417 CC module/accel/error/accel_error.o 00:02:55.417 CC module/sock/posix/posix.o 00:02:55.417 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:55.417 LIB libspdk_env_dpdk_rpc.a 00:02:55.417 SO libspdk_env_dpdk_rpc.so.6.0 00:02:55.417 SYMLINK libspdk_env_dpdk_rpc.so 00:02:55.417 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:55.417 CC module/accel/ioat/accel_ioat_rpc.o 00:02:55.417 CC module/keyring/file/keyring_rpc.o 00:02:55.417 LIB libspdk_scheduler_dynamic.a 00:02:55.417 SO libspdk_scheduler_dynamic.so.4.0 00:02:55.417 CC module/accel/iaa/accel_iaa_rpc.o 00:02:55.417 CC module/accel/error/accel_error_rpc.o 00:02:55.674 SYMLINK libspdk_scheduler_dynamic.so 00:02:55.674 CC module/fsdev/aio/linux_aio_mgr.o 00:02:55.674 LIB libspdk_blob_bdev.a 00:02:55.674 LIB libspdk_accel_ioat.a 00:02:55.674 SO libspdk_blob_bdev.so.11.0 00:02:55.674 SO libspdk_accel_ioat.so.6.0 00:02:55.674 LIB libspdk_keyring_file.a 00:02:55.674 SO libspdk_keyring_file.so.2.0 00:02:55.674 CC module/accel/dsa/accel_dsa_rpc.o 00:02:55.674 SYMLINK libspdk_blob_bdev.so 00:02:55.674 LIB libspdk_accel_error.a 00:02:55.674 SYMLINK libspdk_accel_ioat.so 00:02:55.674 LIB libspdk_accel_iaa.a 00:02:55.674 SO libspdk_accel_error.so.2.0 00:02:55.674 SYMLINK libspdk_keyring_file.so 00:02:55.674 SO libspdk_accel_iaa.so.3.0 00:02:55.674 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:55.674 SYMLINK libspdk_accel_error.so 00:02:55.674 SYMLINK libspdk_accel_iaa.so 00:02:55.674 LIB libspdk_accel_dsa.a 00:02:55.674 SO libspdk_accel_dsa.so.5.0 00:02:55.674 CC module/scheduler/gscheduler/gscheduler.o 00:02:55.933 CC module/keyring/linux/keyring.o 00:02:55.933 SYMLINK libspdk_accel_dsa.so 00:02:55.933 LIB libspdk_scheduler_dpdk_governor.a 00:02:55.933 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:55.933 LIB libspdk_scheduler_gscheduler.a 00:02:55.933 CC module/bdev/delay/vbdev_delay.o 00:02:55.933 CC module/bdev/gpt/gpt.o 00:02:55.933 CC module/bdev/error/vbdev_error.o 00:02:55.933 SO libspdk_scheduler_gscheduler.so.4.0 00:02:55.933 CC module/keyring/linux/keyring_rpc.o 00:02:55.933 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:55.933 CC module/bdev/error/vbdev_error_rpc.o 00:02:55.933 CC module/blobfs/bdev/blobfs_bdev.o 00:02:55.933 SYMLINK libspdk_scheduler_gscheduler.so 00:02:55.933 CC module/bdev/gpt/vbdev_gpt.o 00:02:55.933 CC module/bdev/lvol/vbdev_lvol.o 00:02:55.933 LIB libspdk_keyring_linux.a 00:02:56.191 SO libspdk_keyring_linux.so.1.0 00:02:56.191 LIB libspdk_fsdev_aio.a 00:02:56.191 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:56.191 SO libspdk_fsdev_aio.so.1.0 00:02:56.191 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:56.191 SYMLINK libspdk_keyring_linux.so 00:02:56.191 LIB libspdk_sock_posix.a 00:02:56.191 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:56.191 SYMLINK libspdk_fsdev_aio.so 00:02:56.191 SO libspdk_sock_posix.so.6.0 00:02:56.191 LIB libspdk_bdev_error.a 00:02:56.191 SO libspdk_bdev_error.so.6.0 00:02:56.191 LIB libspdk_blobfs_bdev.a 00:02:56.191 LIB libspdk_bdev_gpt.a 00:02:56.191 SYMLINK 
libspdk_sock_posix.so 00:02:56.191 SYMLINK libspdk_bdev_error.so 00:02:56.191 SO libspdk_blobfs_bdev.so.6.0 00:02:56.191 CC module/bdev/malloc/bdev_malloc.o 00:02:56.191 SO libspdk_bdev_gpt.so.6.0 00:02:56.450 LIB libspdk_bdev_delay.a 00:02:56.450 CC module/bdev/null/bdev_null.o 00:02:56.450 SYMLINK libspdk_blobfs_bdev.so 00:02:56.450 CC module/bdev/null/bdev_null_rpc.o 00:02:56.450 SO libspdk_bdev_delay.so.6.0 00:02:56.450 SYMLINK libspdk_bdev_gpt.so 00:02:56.450 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:56.450 CC module/bdev/nvme/bdev_nvme.o 00:02:56.450 SYMLINK libspdk_bdev_delay.so 00:02:56.450 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:56.450 CC module/bdev/passthru/vbdev_passthru.o 00:02:56.450 CC module/bdev/raid/bdev_raid.o 00:02:56.450 CC module/bdev/raid/bdev_raid_rpc.o 00:02:56.450 LIB libspdk_bdev_lvol.a 00:02:56.450 SO libspdk_bdev_lvol.so.6.0 00:02:56.450 CC module/bdev/nvme/nvme_rpc.o 00:02:56.450 CC module/bdev/nvme/bdev_mdns_client.o 00:02:56.450 SYMLINK libspdk_bdev_lvol.so 00:02:56.450 CC module/bdev/raid/bdev_raid_sb.o 00:02:56.708 LIB libspdk_bdev_null.a 00:02:56.708 SO libspdk_bdev_null.so.6.0 00:02:56.708 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:56.708 CC module/bdev/raid/raid0.o 00:02:56.708 LIB libspdk_bdev_malloc.a 00:02:56.708 SYMLINK libspdk_bdev_null.so 00:02:56.708 CC module/bdev/raid/raid1.o 00:02:56.708 SO libspdk_bdev_malloc.so.6.0 00:02:56.708 SYMLINK libspdk_bdev_malloc.so 00:02:56.708 LIB libspdk_bdev_passthru.a 00:02:56.708 SO libspdk_bdev_passthru.so.6.0 00:02:56.708 CC module/bdev/split/vbdev_split.o 00:02:56.966 CC module/bdev/raid/concat.o 00:02:56.966 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:56.966 SYMLINK libspdk_bdev_passthru.so 00:02:56.966 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:56.966 CC module/bdev/xnvme/bdev_xnvme.o 00:02:56.966 CC module/bdev/split/vbdev_split_rpc.o 00:02:56.966 CC module/bdev/nvme/vbdev_opal.o 00:02:56.966 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:56.966 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:02:56.966 CC module/bdev/aio/bdev_aio.o 00:02:56.966 CC module/bdev/aio/bdev_aio_rpc.o 00:02:57.224 LIB libspdk_bdev_split.a 00:02:57.224 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:57.224 SO libspdk_bdev_split.so.6.0 00:02:57.224 LIB libspdk_bdev_zone_block.a 00:02:57.224 SO libspdk_bdev_zone_block.so.6.0 00:02:57.224 SYMLINK libspdk_bdev_split.so 00:02:57.224 SYMLINK libspdk_bdev_zone_block.so 00:02:57.224 LIB libspdk_bdev_xnvme.a 00:02:57.224 SO libspdk_bdev_xnvme.so.3.0 00:02:57.224 SYMLINK libspdk_bdev_xnvme.so 00:02:57.224 CC module/bdev/ftl/bdev_ftl.o 00:02:57.224 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:57.224 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:57.224 CC module/bdev/iscsi/bdev_iscsi.o 00:02:57.225 LIB libspdk_bdev_raid.a 00:02:57.225 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:57.225 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:57.225 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:57.483 SO libspdk_bdev_raid.so.6.0 00:02:57.483 LIB libspdk_bdev_aio.a 00:02:57.483 SO libspdk_bdev_aio.so.6.0 00:02:57.483 SYMLINK libspdk_bdev_raid.so 00:02:57.483 SYMLINK libspdk_bdev_aio.so 00:02:57.483 LIB libspdk_bdev_ftl.a 00:02:57.740 SO libspdk_bdev_ftl.so.6.0 00:02:57.740 SYMLINK libspdk_bdev_ftl.so 00:02:57.740 LIB libspdk_bdev_iscsi.a 00:02:57.740 SO libspdk_bdev_iscsi.so.6.0 00:02:57.740 LIB libspdk_bdev_virtio.a 00:02:57.740 SO libspdk_bdev_virtio.so.6.0 00:02:57.740 SYMLINK libspdk_bdev_iscsi.so 00:02:57.999 SYMLINK libspdk_bdev_virtio.so 00:02:58.934 LIB libspdk_bdev_nvme.a 
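
The module/bdev/* objects above (malloc, null, nvme, passthru, raid, split, zone_block, xnvme, aio, ftl, iscsi, virtio) are the block-device backends registered with SPDK's generic bdev layer. A consumer opens any of them by name through the same API, roughly as follows; this is a sketch rather than code from this repository, "Malloc0" is a hypothetical bdev name, and a 512-byte block size is assumed:

/* Illustrative sketch, NOT part of this build log: opening a bdev exposed
 * by one of the modules above and issuing a read. Must run on an SPDK
 * thread inside the app framework. */
#include "spdk/bdev.h"

static void read_done(struct spdk_bdev_io *bdev_io, bool success, void *cb_arg)
{
    spdk_bdev_free_io(bdev_io);    /* always release the completed I/O */
}

static void bdev_event(enum spdk_bdev_event_type type,
                       struct spdk_bdev *bdev, void *event_ctx)
{
    /* react to hot-remove and resize events; empty in this sketch */
}

static int read_first_block(void *buf /* 512 B, e.g. from spdk_dma_malloc() */)
{
    struct spdk_bdev_desc *desc = NULL;

    if (spdk_bdev_open_ext("Malloc0", false, bdev_event, NULL, &desc) != 0)
        return -1;

    struct spdk_io_channel *ch = spdk_bdev_get_io_channel(desc);
    if (ch == NULL)
        return -1;

    /* Read 512 bytes at offset 0; read_done() fires on completion. */
    return spdk_bdev_read(desc, ch, buf, 0, 512, read_done, NULL);
}
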
00:02:58.934 SO libspdk_bdev_nvme.so.7.1 00:02:59.192 SYMLINK libspdk_bdev_nvme.so 00:02:59.450 CC module/event/subsystems/keyring/keyring.o 00:02:59.450 CC module/event/subsystems/sock/sock.o 00:02:59.450 CC module/event/subsystems/vmd/vmd.o 00:02:59.450 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:59.450 CC module/event/subsystems/scheduler/scheduler.o 00:02:59.450 CC module/event/subsystems/iobuf/iobuf.o 00:02:59.450 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:59.450 CC module/event/subsystems/fsdev/fsdev.o 00:02:59.450 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:59.707 LIB libspdk_event_scheduler.a 00:02:59.707 LIB libspdk_event_sock.a 00:02:59.707 LIB libspdk_event_fsdev.a 00:02:59.707 SO libspdk_event_scheduler.so.4.0 00:02:59.707 SO libspdk_event_sock.so.5.0 00:02:59.707 LIB libspdk_event_keyring.a 00:02:59.707 LIB libspdk_event_vmd.a 00:02:59.707 LIB libspdk_event_vhost_blk.a 00:02:59.707 SO libspdk_event_fsdev.so.1.0 00:02:59.707 SO libspdk_event_keyring.so.1.0 00:02:59.707 SO libspdk_event_vhost_blk.so.3.0 00:02:59.707 SO libspdk_event_vmd.so.6.0 00:02:59.707 SYMLINK libspdk_event_sock.so 00:02:59.707 SYMLINK libspdk_event_scheduler.so 00:02:59.707 LIB libspdk_event_iobuf.a 00:02:59.707 SYMLINK libspdk_event_fsdev.so 00:02:59.707 SYMLINK libspdk_event_keyring.so 00:02:59.707 SYMLINK libspdk_event_vhost_blk.so 00:02:59.707 SYMLINK libspdk_event_vmd.so 00:02:59.707 SO libspdk_event_iobuf.so.3.0 00:02:59.707 SYMLINK libspdk_event_iobuf.so 00:02:59.965 CC module/event/subsystems/accel/accel.o 00:03:00.236 LIB libspdk_event_accel.a 00:03:00.236 SO libspdk_event_accel.so.6.0 00:03:00.236 SYMLINK libspdk_event_accel.so 00:03:00.511 CC module/event/subsystems/bdev/bdev.o 00:03:00.511 LIB libspdk_event_bdev.a 00:03:00.768 SO libspdk_event_bdev.so.6.0 00:03:00.768 SYMLINK libspdk_event_bdev.so 00:03:00.768 CC module/event/subsystems/ublk/ublk.o 00:03:00.768 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:00.768 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:00.768 CC module/event/subsystems/nbd/nbd.o 00:03:00.768 CC module/event/subsystems/scsi/scsi.o 00:03:01.027 LIB libspdk_event_ublk.a 00:03:01.027 LIB libspdk_event_scsi.a 00:03:01.027 SO libspdk_event_ublk.so.3.0 00:03:01.027 LIB libspdk_event_nbd.a 00:03:01.027 SO libspdk_event_scsi.so.6.0 00:03:01.027 SO libspdk_event_nbd.so.6.0 00:03:01.027 LIB libspdk_event_nvmf.a 00:03:01.027 SYMLINK libspdk_event_ublk.so 00:03:01.027 SYMLINK libspdk_event_scsi.so 00:03:01.027 SO libspdk_event_nvmf.so.6.0 00:03:01.027 SYMLINK libspdk_event_nbd.so 00:03:01.027 SYMLINK libspdk_event_nvmf.so 00:03:01.284 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:01.284 CC module/event/subsystems/iscsi/iscsi.o 00:03:01.284 LIB libspdk_event_vhost_scsi.a 00:03:01.284 LIB libspdk_event_iscsi.a 00:03:01.542 SO libspdk_event_vhost_scsi.so.3.0 00:03:01.542 SO libspdk_event_iscsi.so.6.0 00:03:01.542 SYMLINK libspdk_event_vhost_scsi.so 00:03:01.542 SYMLINK libspdk_event_iscsi.so 00:03:01.542 SO libspdk.so.6.0 00:03:01.542 SYMLINK libspdk.so 00:03:01.800 CXX app/trace/trace.o 00:03:01.800 CC app/trace_record/trace_record.o 00:03:01.800 CC app/spdk_nvme_identify/identify.o 00:03:01.800 CC app/spdk_lspci/spdk_lspci.o 00:03:01.800 CC app/spdk_nvme_perf/perf.o 00:03:01.800 CC app/iscsi_tgt/iscsi_tgt.o 00:03:01.800 CC app/nvmf_tgt/nvmf_main.o 00:03:01.800 CC app/spdk_tgt/spdk_tgt.o 00:03:01.800 CC examples/util/zipf/zipf.o 00:03:01.800 CC test/thread/poller_perf/poller_perf.o 00:03:01.800 LINK spdk_lspci 00:03:02.058 LINK iscsi_tgt 
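
spdk_nvme_identify and spdk_nvme_perf, compiled just above, are built around the NVMe driver's probe/attach callback pattern. Roughly, and only as a sketch (not this repository's code; SPDK environment setup via spdk_env_init() is assumed to have happened already in the caller):

/* Illustrative sketch, NOT part of this build log: the probe/attach flow
 * used by tools such as spdk_nvme_identify linked below. */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

static bool probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
                     struct spdk_nvme_ctrlr_opts *opts)
{
    return true;                 /* attach to every controller found */
}

static void attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
                      struct spdk_nvme_ctrlr *ctrlr,
                      const struct spdk_nvme_ctrlr_opts *opts)
{
    printf("attached: %s\n", trid->traddr);
    spdk_nvme_detach(ctrlr);     /* identify-style tools detach when done */
}

static int probe_local_pcie(void)
{
    /* A NULL transport ID enumerates the local PCIe bus. */
    return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}
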
00:03:02.058 LINK zipf 00:03:02.058 LINK poller_perf 00:03:02.058 LINK nvmf_tgt 00:03:02.058 LINK spdk_tgt 00:03:02.058 LINK spdk_trace_record 00:03:02.058 LINK spdk_trace 00:03:02.058 CC app/spdk_nvme_discover/discovery_aer.o 00:03:02.317 CC app/spdk_top/spdk_top.o 00:03:02.317 CC examples/ioat/perf/perf.o 00:03:02.317 TEST_HEADER include/spdk/accel.h 00:03:02.317 TEST_HEADER include/spdk/accel_module.h 00:03:02.317 TEST_HEADER include/spdk/assert.h 00:03:02.317 CC test/dma/test_dma/test_dma.o 00:03:02.317 TEST_HEADER include/spdk/barrier.h 00:03:02.317 TEST_HEADER include/spdk/base64.h 00:03:02.317 TEST_HEADER include/spdk/bdev.h 00:03:02.317 TEST_HEADER include/spdk/bdev_module.h 00:03:02.317 TEST_HEADER include/spdk/bdev_zone.h 00:03:02.317 TEST_HEADER include/spdk/bit_array.h 00:03:02.317 TEST_HEADER include/spdk/bit_pool.h 00:03:02.317 TEST_HEADER include/spdk/blob_bdev.h 00:03:02.317 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:02.317 TEST_HEADER include/spdk/blobfs.h 00:03:02.317 TEST_HEADER include/spdk/blob.h 00:03:02.317 TEST_HEADER include/spdk/conf.h 00:03:02.317 TEST_HEADER include/spdk/config.h 00:03:02.317 TEST_HEADER include/spdk/cpuset.h 00:03:02.317 TEST_HEADER include/spdk/crc16.h 00:03:02.317 TEST_HEADER include/spdk/crc32.h 00:03:02.317 TEST_HEADER include/spdk/crc64.h 00:03:02.317 TEST_HEADER include/spdk/dif.h 00:03:02.317 LINK spdk_nvme_discover 00:03:02.317 TEST_HEADER include/spdk/dma.h 00:03:02.317 TEST_HEADER include/spdk/endian.h 00:03:02.317 TEST_HEADER include/spdk/env_dpdk.h 00:03:02.317 TEST_HEADER include/spdk/env.h 00:03:02.317 TEST_HEADER include/spdk/event.h 00:03:02.317 TEST_HEADER include/spdk/fd_group.h 00:03:02.317 TEST_HEADER include/spdk/fd.h 00:03:02.317 TEST_HEADER include/spdk/file.h 00:03:02.317 TEST_HEADER include/spdk/fsdev.h 00:03:02.317 TEST_HEADER include/spdk/fsdev_module.h 00:03:02.317 TEST_HEADER include/spdk/ftl.h 00:03:02.317 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:02.317 TEST_HEADER include/spdk/gpt_spec.h 00:03:02.317 TEST_HEADER include/spdk/hexlify.h 00:03:02.317 TEST_HEADER include/spdk/histogram_data.h 00:03:02.317 TEST_HEADER include/spdk/idxd.h 00:03:02.317 TEST_HEADER include/spdk/idxd_spec.h 00:03:02.317 TEST_HEADER include/spdk/init.h 00:03:02.317 CC test/app/bdev_svc/bdev_svc.o 00:03:02.317 TEST_HEADER include/spdk/ioat.h 00:03:02.317 TEST_HEADER include/spdk/ioat_spec.h 00:03:02.317 TEST_HEADER include/spdk/iscsi_spec.h 00:03:02.317 CC test/event/event_perf/event_perf.o 00:03:02.317 TEST_HEADER include/spdk/json.h 00:03:02.317 TEST_HEADER include/spdk/jsonrpc.h 00:03:02.317 TEST_HEADER include/spdk/keyring.h 00:03:02.317 TEST_HEADER include/spdk/keyring_module.h 00:03:02.317 TEST_HEADER include/spdk/likely.h 00:03:02.317 TEST_HEADER include/spdk/log.h 00:03:02.317 TEST_HEADER include/spdk/lvol.h 00:03:02.317 TEST_HEADER include/spdk/md5.h 00:03:02.317 TEST_HEADER include/spdk/memory.h 00:03:02.317 TEST_HEADER include/spdk/mmio.h 00:03:02.317 TEST_HEADER include/spdk/nbd.h 00:03:02.317 TEST_HEADER include/spdk/net.h 00:03:02.317 TEST_HEADER include/spdk/notify.h 00:03:02.317 TEST_HEADER include/spdk/nvme.h 00:03:02.317 TEST_HEADER include/spdk/nvme_intel.h 00:03:02.317 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:02.317 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:02.317 TEST_HEADER include/spdk/nvme_spec.h 00:03:02.317 TEST_HEADER include/spdk/nvme_zns.h 00:03:02.317 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:02.317 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:02.317 TEST_HEADER include/spdk/nvmf.h 
00:03:02.317 TEST_HEADER include/spdk/nvmf_spec.h 00:03:02.317 TEST_HEADER include/spdk/nvmf_transport.h 00:03:02.317 TEST_HEADER include/spdk/opal.h 00:03:02.317 TEST_HEADER include/spdk/opal_spec.h 00:03:02.317 TEST_HEADER include/spdk/pci_ids.h 00:03:02.317 TEST_HEADER include/spdk/pipe.h 00:03:02.317 TEST_HEADER include/spdk/queue.h 00:03:02.317 TEST_HEADER include/spdk/reduce.h 00:03:02.317 TEST_HEADER include/spdk/rpc.h 00:03:02.317 LINK ioat_perf 00:03:02.317 CC test/env/mem_callbacks/mem_callbacks.o 00:03:02.317 TEST_HEADER include/spdk/scheduler.h 00:03:02.317 TEST_HEADER include/spdk/scsi.h 00:03:02.317 TEST_HEADER include/spdk/scsi_spec.h 00:03:02.317 TEST_HEADER include/spdk/sock.h 00:03:02.317 TEST_HEADER include/spdk/stdinc.h 00:03:02.317 TEST_HEADER include/spdk/string.h 00:03:02.317 TEST_HEADER include/spdk/thread.h 00:03:02.318 TEST_HEADER include/spdk/trace.h 00:03:02.318 TEST_HEADER include/spdk/trace_parser.h 00:03:02.318 TEST_HEADER include/spdk/tree.h 00:03:02.318 TEST_HEADER include/spdk/ublk.h 00:03:02.318 TEST_HEADER include/spdk/util.h 00:03:02.318 TEST_HEADER include/spdk/uuid.h 00:03:02.576 TEST_HEADER include/spdk/version.h 00:03:02.576 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:02.576 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:02.576 TEST_HEADER include/spdk/vhost.h 00:03:02.576 TEST_HEADER include/spdk/vmd.h 00:03:02.576 TEST_HEADER include/spdk/xor.h 00:03:02.576 TEST_HEADER include/spdk/zipf.h 00:03:02.576 CXX test/cpp_headers/accel.o 00:03:02.576 LINK spdk_nvme_identify 00:03:02.576 CXX test/cpp_headers/accel_module.o 00:03:02.576 LINK spdk_nvme_perf 00:03:02.576 LINK event_perf 00:03:02.576 LINK bdev_svc 00:03:02.576 CC examples/ioat/verify/verify.o 00:03:02.576 CXX test/cpp_headers/assert.o 00:03:02.576 CXX test/cpp_headers/barrier.o 00:03:02.576 CXX test/cpp_headers/base64.o 00:03:02.576 CC test/rpc_client/rpc_client_test.o 00:03:02.576 LINK test_dma 00:03:02.834 CC test/event/reactor/reactor.o 00:03:02.834 LINK verify 00:03:02.834 CXX test/cpp_headers/bdev.o 00:03:02.834 LINK rpc_client_test 00:03:02.834 CC test/env/vtophys/vtophys.o 00:03:02.834 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:02.834 LINK reactor 00:03:02.834 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:02.834 LINK mem_callbacks 00:03:02.834 CXX test/cpp_headers/bdev_module.o 00:03:03.092 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:03.092 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:03.092 LINK vtophys 00:03:03.092 LINK env_dpdk_post_init 00:03:03.092 CC test/event/reactor_perf/reactor_perf.o 00:03:03.092 CC examples/vmd/lsvmd/lsvmd.o 00:03:03.092 LINK spdk_top 00:03:03.092 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:03.092 CC examples/vmd/led/led.o 00:03:03.092 CXX test/cpp_headers/bdev_zone.o 00:03:03.092 LINK lsvmd 00:03:03.092 LINK reactor_perf 00:03:03.351 LINK led 00:03:03.351 CC test/env/memory/memory_ut.o 00:03:03.351 CC app/spdk_dd/spdk_dd.o 00:03:03.351 CXX test/cpp_headers/bit_array.o 00:03:03.351 LINK nvme_fuzz 00:03:03.351 CXX test/cpp_headers/bit_pool.o 00:03:03.351 CC app/fio/nvme/fio_plugin.o 00:03:03.351 CC test/event/app_repeat/app_repeat.o 00:03:03.351 CXX test/cpp_headers/blob_bdev.o 00:03:03.609 CC examples/idxd/perf/perf.o 00:03:03.609 LINK vhost_fuzz 00:03:03.609 CC app/fio/bdev/fio_plugin.o 00:03:03.609 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:03.609 LINK app_repeat 00:03:03.609 CXX test/cpp_headers/blobfs_bdev.o 00:03:03.609 LINK spdk_dd 00:03:03.609 CXX test/cpp_headers/blobfs.o 00:03:03.609 LINK interrupt_tgt 00:03:03.868 
CXX test/cpp_headers/blob.o 00:03:03.868 LINK idxd_perf 00:03:03.868 CC test/app/histogram_perf/histogram_perf.o 00:03:03.868 CC test/event/scheduler/scheduler.o 00:03:03.868 LINK spdk_nvme 00:03:03.868 CXX test/cpp_headers/conf.o 00:03:03.868 CC test/app/jsoncat/jsoncat.o 00:03:03.868 CC test/env/pci/pci_ut.o 00:03:03.868 LINK spdk_bdev 00:03:03.868 LINK histogram_perf 00:03:04.126 LINK scheduler 00:03:04.126 LINK jsoncat 00:03:04.126 CXX test/cpp_headers/config.o 00:03:04.126 CXX test/cpp_headers/cpuset.o 00:03:04.126 CC examples/thread/thread/thread_ex.o 00:03:04.126 CC examples/sock/hello_world/hello_sock.o 00:03:04.126 LINK memory_ut 00:03:04.126 CXX test/cpp_headers/crc16.o 00:03:04.126 CC app/vhost/vhost.o 00:03:04.126 CC test/accel/dif/dif.o 00:03:04.384 CXX test/cpp_headers/crc32.o 00:03:04.384 LINK pci_ut 00:03:04.384 CXX test/cpp_headers/crc64.o 00:03:04.384 CC test/app/stub/stub.o 00:03:04.384 LINK vhost 00:03:04.384 LINK thread 00:03:04.384 LINK hello_sock 00:03:04.384 CC test/blobfs/mkfs/mkfs.o 00:03:04.384 CXX test/cpp_headers/dif.o 00:03:04.384 LINK stub 00:03:04.384 CXX test/cpp_headers/dma.o 00:03:04.384 CXX test/cpp_headers/endian.o 00:03:04.642 LINK mkfs 00:03:04.642 CC test/lvol/esnap/esnap.o 00:03:04.642 CC test/nvme/aer/aer.o 00:03:04.642 CC examples/nvme/hello_world/hello_world.o 00:03:04.642 CXX test/cpp_headers/env_dpdk.o 00:03:04.642 CC examples/nvme/reconnect/reconnect.o 00:03:04.642 CXX test/cpp_headers/env.o 00:03:04.642 CC test/nvme/reset/reset.o 00:03:04.642 LINK iscsi_fuzz 00:03:04.642 CC test/nvme/sgl/sgl.o 00:03:04.900 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:04.900 CXX test/cpp_headers/event.o 00:03:04.900 LINK dif 00:03:04.900 LINK hello_world 00:03:04.900 CXX test/cpp_headers/fd_group.o 00:03:04.900 LINK aer 00:03:04.900 LINK reset 00:03:04.900 LINK sgl 00:03:05.159 LINK reconnect 00:03:05.159 CXX test/cpp_headers/fd.o 00:03:05.159 CC examples/nvme/hotplug/hotplug.o 00:03:05.159 CC examples/nvme/arbitration/arbitration.o 00:03:05.159 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:05.159 CC test/nvme/e2edp/nvme_dp.o 00:03:05.159 CC examples/nvme/abort/abort.o 00:03:05.159 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:05.159 CXX test/cpp_headers/file.o 00:03:05.159 LINK cmb_copy 00:03:05.418 LINK pmr_persistence 00:03:05.418 LINK hotplug 00:03:05.418 CXX test/cpp_headers/fsdev.o 00:03:05.418 CC test/bdev/bdevio/bdevio.o 00:03:05.418 LINK nvme_manage 00:03:05.418 CXX test/cpp_headers/fsdev_module.o 00:03:05.418 CXX test/cpp_headers/ftl.o 00:03:05.418 LINK nvme_dp 00:03:05.418 LINK arbitration 00:03:05.418 CXX test/cpp_headers/fuse_dispatcher.o 00:03:05.418 CXX test/cpp_headers/gpt_spec.o 00:03:05.418 CXX test/cpp_headers/hexlify.o 00:03:05.418 LINK abort 00:03:05.418 CXX test/cpp_headers/histogram_data.o 00:03:05.676 CC test/nvme/overhead/overhead.o 00:03:05.676 CXX test/cpp_headers/idxd.o 00:03:05.676 CXX test/cpp_headers/idxd_spec.o 00:03:05.676 CXX test/cpp_headers/init.o 00:03:05.676 CXX test/cpp_headers/ioat.o 00:03:05.676 CXX test/cpp_headers/ioat_spec.o 00:03:05.676 CC examples/accel/perf/accel_perf.o 00:03:05.676 CXX test/cpp_headers/iscsi_spec.o 00:03:05.676 LINK bdevio 00:03:05.676 CXX test/cpp_headers/json.o 00:03:05.676 CC test/nvme/err_injection/err_injection.o 00:03:05.676 CXX test/cpp_headers/jsonrpc.o 00:03:05.935 CXX test/cpp_headers/keyring.o 00:03:05.935 LINK overhead 00:03:05.935 CXX test/cpp_headers/keyring_module.o 00:03:05.935 CC test/nvme/startup/startup.o 00:03:05.935 LINK err_injection 00:03:05.935 CXX 
test/cpp_headers/likely.o 00:03:05.935 CC test/nvme/reserve/reserve.o 00:03:05.935 CC examples/blob/hello_world/hello_blob.o 00:03:05.935 CXX test/cpp_headers/log.o 00:03:06.193 CXX test/cpp_headers/lvol.o 00:03:06.193 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:06.193 LINK startup 00:03:06.193 CC test/nvme/simple_copy/simple_copy.o 00:03:06.193 CXX test/cpp_headers/md5.o 00:03:06.193 LINK accel_perf 00:03:06.193 CXX test/cpp_headers/memory.o 00:03:06.193 LINK hello_blob 00:03:06.193 CXX test/cpp_headers/mmio.o 00:03:06.193 LINK reserve 00:03:06.193 LINK simple_copy 00:03:06.511 CC test/nvme/connect_stress/connect_stress.o 00:03:06.511 CC examples/blob/cli/blobcli.o 00:03:06.511 CXX test/cpp_headers/nbd.o 00:03:06.511 LINK hello_fsdev 00:03:06.511 CXX test/cpp_headers/net.o 00:03:06.511 CXX test/cpp_headers/notify.o 00:03:06.511 CC test/nvme/boot_partition/boot_partition.o 00:03:06.511 CC examples/bdev/hello_world/hello_bdev.o 00:03:06.511 CXX test/cpp_headers/nvme.o 00:03:06.511 CC examples/bdev/bdevperf/bdevperf.o 00:03:06.511 LINK connect_stress 00:03:06.511 CXX test/cpp_headers/nvme_intel.o 00:03:06.511 CC test/nvme/compliance/nvme_compliance.o 00:03:06.511 LINK boot_partition 00:03:06.511 CC test/nvme/fused_ordering/fused_ordering.o 00:03:06.788 CXX test/cpp_headers/nvme_ocssd.o 00:03:06.788 LINK hello_bdev 00:03:06.788 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:06.788 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:06.788 LINK blobcli 00:03:06.788 LINK fused_ordering 00:03:06.788 CXX test/cpp_headers/nvme_spec.o 00:03:06.788 CC test/nvme/fdp/fdp.o 00:03:06.788 CXX test/cpp_headers/nvme_zns.o 00:03:06.788 CXX test/cpp_headers/nvmf_cmd.o 00:03:06.788 LINK nvme_compliance 00:03:06.788 LINK doorbell_aers 00:03:06.788 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:06.788 CXX test/cpp_headers/nvmf.o 00:03:07.046 CXX test/cpp_headers/nvmf_spec.o 00:03:07.046 CXX test/cpp_headers/nvmf_transport.o 00:03:07.046 CC test/nvme/cuse/cuse.o 00:03:07.046 CXX test/cpp_headers/opal.o 00:03:07.046 CXX test/cpp_headers/opal_spec.o 00:03:07.046 CXX test/cpp_headers/pci_ids.o 00:03:07.046 CXX test/cpp_headers/pipe.o 00:03:07.046 CXX test/cpp_headers/queue.o 00:03:07.046 CXX test/cpp_headers/reduce.o 00:03:07.046 LINK fdp 00:03:07.046 CXX test/cpp_headers/rpc.o 00:03:07.046 CXX test/cpp_headers/scheduler.o 00:03:07.046 CXX test/cpp_headers/scsi.o 00:03:07.046 CXX test/cpp_headers/scsi_spec.o 00:03:07.304 CXX test/cpp_headers/sock.o 00:03:07.304 CXX test/cpp_headers/stdinc.o 00:03:07.304 CXX test/cpp_headers/string.o 00:03:07.304 CXX test/cpp_headers/thread.o 00:03:07.304 CXX test/cpp_headers/trace.o 00:03:07.304 CXX test/cpp_headers/trace_parser.o 00:03:07.304 LINK bdevperf 00:03:07.304 CXX test/cpp_headers/tree.o 00:03:07.304 CXX test/cpp_headers/ublk.o 00:03:07.304 CXX test/cpp_headers/util.o 00:03:07.304 CXX test/cpp_headers/uuid.o 00:03:07.304 CXX test/cpp_headers/version.o 00:03:07.304 CXX test/cpp_headers/vfio_user_pci.o 00:03:07.304 CXX test/cpp_headers/vfio_user_spec.o 00:03:07.304 CXX test/cpp_headers/vmd.o 00:03:07.304 CXX test/cpp_headers/vhost.o 00:03:07.561 CXX test/cpp_headers/xor.o 00:03:07.562 CXX test/cpp_headers/zipf.o 00:03:07.819 CC examples/nvmf/nvmf/nvmf.o 00:03:08.078 LINK nvmf 00:03:08.078 LINK cuse 00:03:09.452 LINK esnap 00:03:09.452 00:03:09.452 real 1m9.468s 00:03:09.452 user 6m26.538s 00:03:09.452 sys 1m12.315s 00:03:09.452 12:35:34 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:09.452 12:35:34 make -- common/autotest_common.sh@10 -- $ set +x 00:03:09.452 
************************************ 00:03:09.452 END TEST make 00:03:09.452 ************************************ 00:03:09.711 12:35:34 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:09.711 12:35:34 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:09.711 12:35:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:09.711 12:35:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.711 12:35:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:09.711 12:35:34 -- pm/common@44 -- $ pid=5062 00:03:09.711 12:35:34 -- pm/common@50 -- $ kill -TERM 5062 00:03:09.711 12:35:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.711 12:35:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:09.711 12:35:34 -- pm/common@44 -- $ pid=5063 00:03:09.711 12:35:34 -- pm/common@50 -- $ kill -TERM 5063 00:03:09.711 12:35:34 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:09.711 12:35:34 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:09.711 12:35:35 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:09.711 12:35:35 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:09.711 12:35:35 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:09.711 12:35:35 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:09.711 12:35:35 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:09.711 12:35:35 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:09.711 12:35:35 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:09.711 12:35:35 -- scripts/common.sh@336 -- # IFS=.-: 00:03:09.711 12:35:35 -- scripts/common.sh@336 -- # read -ra ver1 00:03:09.711 12:35:35 -- scripts/common.sh@337 -- # IFS=.-: 00:03:09.711 12:35:35 -- scripts/common.sh@337 -- # read -ra ver2 00:03:09.711 12:35:35 -- scripts/common.sh@338 -- # local 'op=<' 00:03:09.711 12:35:35 -- scripts/common.sh@340 -- # ver1_l=2 00:03:09.711 12:35:35 -- scripts/common.sh@341 -- # ver2_l=1 00:03:09.711 12:35:35 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:09.711 12:35:35 -- scripts/common.sh@344 -- # case "$op" in 00:03:09.711 12:35:35 -- scripts/common.sh@345 -- # : 1 00:03:09.711 12:35:35 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:09.711 12:35:35 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:09.711 12:35:35 -- scripts/common.sh@365 -- # decimal 1 00:03:09.711 12:35:35 -- scripts/common.sh@353 -- # local d=1 00:03:09.711 12:35:35 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:09.711 12:35:35 -- scripts/common.sh@355 -- # echo 1 00:03:09.711 12:35:35 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:09.711 12:35:35 -- scripts/common.sh@366 -- # decimal 2 00:03:09.711 12:35:35 -- scripts/common.sh@353 -- # local d=2 00:03:09.711 12:35:35 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:09.711 12:35:35 -- scripts/common.sh@355 -- # echo 2 00:03:09.711 12:35:35 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:09.711 12:35:35 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:09.711 12:35:35 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:09.711 12:35:35 -- scripts/common.sh@368 -- # return 0 00:03:09.711 12:35:35 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:09.711 12:35:35 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:09.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:09.711 --rc genhtml_branch_coverage=1 00:03:09.711 --rc genhtml_function_coverage=1 00:03:09.711 --rc genhtml_legend=1 00:03:09.711 --rc geninfo_all_blocks=1 00:03:09.711 --rc geninfo_unexecuted_blocks=1 00:03:09.711 00:03:09.711 ' 00:03:09.711 12:35:35 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:09.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:09.711 --rc genhtml_branch_coverage=1 00:03:09.711 --rc genhtml_function_coverage=1 00:03:09.711 --rc genhtml_legend=1 00:03:09.711 --rc geninfo_all_blocks=1 00:03:09.711 --rc geninfo_unexecuted_blocks=1 00:03:09.711 00:03:09.711 ' 00:03:09.711 12:35:35 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:09.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:09.711 --rc genhtml_branch_coverage=1 00:03:09.711 --rc genhtml_function_coverage=1 00:03:09.711 --rc genhtml_legend=1 00:03:09.711 --rc geninfo_all_blocks=1 00:03:09.711 --rc geninfo_unexecuted_blocks=1 00:03:09.711 00:03:09.711 ' 00:03:09.711 12:35:35 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:09.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:09.711 --rc genhtml_branch_coverage=1 00:03:09.711 --rc genhtml_function_coverage=1 00:03:09.711 --rc genhtml_legend=1 00:03:09.711 --rc geninfo_all_blocks=1 00:03:09.711 --rc geninfo_unexecuted_blocks=1 00:03:09.711 00:03:09.711 ' 00:03:09.711 12:35:35 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:09.711 12:35:35 -- nvmf/common.sh@7 -- # uname -s 00:03:09.711 12:35:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:09.711 12:35:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:09.711 12:35:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:09.711 12:35:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:09.711 12:35:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:09.711 12:35:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:09.711 12:35:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:09.711 12:35:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:09.711 12:35:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:09.711 12:35:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:09.712 12:35:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fe012ac3-ab77-41b9-bd8c-e89873fa6c26 00:03:09.712 
12:35:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=fe012ac3-ab77-41b9-bd8c-e89873fa6c26 00:03:09.712 12:35:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:09.712 12:35:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:09.712 12:35:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:09.712 12:35:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:09.712 12:35:35 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:09.712 12:35:35 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:09.712 12:35:35 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:09.712 12:35:35 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:09.712 12:35:35 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:09.712 12:35:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:09.712 12:35:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:09.712 12:35:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:09.712 12:35:35 -- paths/export.sh@5 -- # export PATH 00:03:09.712 12:35:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:09.712 12:35:35 -- nvmf/common.sh@51 -- # : 0 00:03:09.712 12:35:35 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:09.712 12:35:35 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:09.712 12:35:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:09.712 12:35:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:09.712 12:35:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:09.712 12:35:35 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:09.712 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:09.712 12:35:35 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:09.712 12:35:35 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:09.712 12:35:35 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:09.712 12:35:35 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:09.712 12:35:35 -- spdk/autotest.sh@32 -- # uname -s 00:03:09.712 12:35:35 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:09.712 12:35:35 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:09.712 12:35:35 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:09.712 12:35:35 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:09.712 12:35:35 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:09.712 12:35:35 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:09.712 12:35:35 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:09.712 12:35:35 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:09.712 12:35:35 -- spdk/autotest.sh@48 -- # udevadm_pid=54252 00:03:09.712 12:35:35 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:09.712 12:35:35 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:09.712 12:35:35 -- pm/common@17 -- # local monitor 00:03:09.712 12:35:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.712 12:35:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.712 12:35:35 -- pm/common@25 -- # sleep 1 00:03:09.712 12:35:35 -- pm/common@21 -- # date +%s 00:03:09.712 12:35:35 -- pm/common@21 -- # date +%s 00:03:09.712 12:35:35 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732106135 00:03:09.712 12:35:35 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732106135 00:03:09.712 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732106135_collect-cpu-load.pm.log 00:03:09.970 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732106135_collect-vmstat.pm.log 00:03:10.903 12:35:36 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:10.903 12:35:36 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:10.903 12:35:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:10.903 12:35:36 -- common/autotest_common.sh@10 -- # set +x 00:03:10.903 12:35:36 -- spdk/autotest.sh@59 -- # create_test_list 00:03:10.903 12:35:36 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:10.903 12:35:36 -- common/autotest_common.sh@10 -- # set +x 00:03:10.903 12:35:36 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:10.903 12:35:36 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:10.903 12:35:36 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:10.903 12:35:36 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:10.903 12:35:36 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:10.903 12:35:36 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:10.903 12:35:36 -- common/autotest_common.sh@1457 -- # uname 00:03:10.903 12:35:36 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:10.903 12:35:36 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:10.903 12:35:36 -- common/autotest_common.sh@1477 -- # uname 00:03:10.903 12:35:36 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:10.903 12:35:36 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:10.903 12:35:36 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:10.903 lcov: LCOV version 1.15 00:03:10.903 12:35:36 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:25.814 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:25.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:40.723 12:36:05 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:40.723 12:36:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:40.723 12:36:05 -- common/autotest_common.sh@10 -- # set +x 00:03:40.723 12:36:05 -- spdk/autotest.sh@78 -- # rm -f 00:03:40.723 12:36:05 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:40.723 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:40.723 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:40.723 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:40.723 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:40.723 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:40.723 12:36:06 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:40.723 12:36:06 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:40.723 12:36:06 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:40.723 12:36:06 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:40.723 12:36:06 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:40.723 12:36:06 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:40.723 12:36:06 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:40.723 12:36:06 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:40.723 12:36:06 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:40.723 12:36:06 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:40.723 12:36:06 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:03:40.723 12:36:06 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:40.723 12:36:06 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:40.723 12:36:06 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:40.723 12:36:06 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:40.723 12:36:06 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:03:40.723 12:36:06 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:03:40.723 12:36:06 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:40.723 12:36:06 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:40.723 12:36:06 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:40.723 12:36:06 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:03:40.723 12:36:06 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:03:40.723 12:36:06 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:40.723 12:36:06 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:40.723 12:36:06 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:40.723 12:36:06 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:03:40.723 12:36:06 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:03:40.723 12:36:06 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:40.723 12:36:06 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:40.723 12:36:06 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:40.723 12:36:06 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:03:40.723 12:36:06 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:03:40.723 12:36:06 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:40.723 12:36:06 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:40.723 12:36:06 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:40.723 12:36:06 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:03:40.723 12:36:06 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:03:40.723 12:36:06 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:40.723 12:36:06 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:40.723 12:36:06 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:40.723 12:36:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:40.723 12:36:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:40.723 12:36:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:40.723 12:36:06 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:40.723 12:36:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:40.723 No valid GPT data, bailing 00:03:40.723 12:36:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:40.723 12:36:06 -- scripts/common.sh@394 -- # pt= 00:03:40.723 12:36:06 -- scripts/common.sh@395 -- # return 1 00:03:40.723 12:36:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:40.723 1+0 records in 00:03:40.723 1+0 records out 00:03:40.723 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023474 s, 44.7 MB/s 00:03:40.723 12:36:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:40.723 12:36:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:40.723 12:36:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:40.723 12:36:06 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:40.723 12:36:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:40.723 No valid GPT data, bailing 00:03:40.723 12:36:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:40.723 12:36:06 -- scripts/common.sh@394 -- # pt= 00:03:40.723 12:36:06 -- scripts/common.sh@395 -- # return 1 00:03:40.723 12:36:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:40.723 1+0 records in 00:03:40.723 1+0 records out 00:03:40.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00364228 s, 288 MB/s 00:03:40.724 12:36:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:40.724 12:36:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:40.724 12:36:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:03:40.724 12:36:06 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:03:40.724 12:36:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:40.724 No valid GPT data, bailing 00:03:40.724 12:36:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:40.724 12:36:06 -- scripts/common.sh@394 -- # pt= 00:03:40.724 12:36:06 -- scripts/common.sh@395 -- # return 1 00:03:40.724 12:36:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:40.982 1+0 
records in 00:03:40.982 1+0 records out 00:03:40.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00430443 s, 244 MB/s 00:03:40.982 12:36:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:40.982 12:36:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:40.982 12:36:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:03:40.982 12:36:06 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:03:40.982 12:36:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:40.982 No valid GPT data, bailing 00:03:40.982 12:36:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:40.982 12:36:06 -- scripts/common.sh@394 -- # pt= 00:03:40.982 12:36:06 -- scripts/common.sh@395 -- # return 1 00:03:40.982 12:36:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:40.982 1+0 records in 00:03:40.982 1+0 records out 00:03:40.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00374819 s, 280 MB/s 00:03:40.982 12:36:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:40.982 12:36:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:40.982 12:36:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:03:40.982 12:36:06 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:03:40.982 12:36:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:40.982 No valid GPT data, bailing 00:03:40.982 12:36:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:40.982 12:36:06 -- scripts/common.sh@394 -- # pt= 00:03:40.982 12:36:06 -- scripts/common.sh@395 -- # return 1 00:03:40.982 12:36:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:03:40.982 1+0 records in 00:03:40.982 1+0 records out 00:03:40.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00326877 s, 321 MB/s 00:03:40.982 12:36:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:40.982 12:36:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:40.982 12:36:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:03:40.982 12:36:06 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:03:40.982 12:36:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:40.982 No valid GPT data, bailing 00:03:40.982 12:36:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:40.982 12:36:06 -- scripts/common.sh@394 -- # pt= 00:03:40.982 12:36:06 -- scripts/common.sh@395 -- # return 1 00:03:40.982 12:36:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:40.982 1+0 records in 00:03:40.982 1+0 records out 00:03:40.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00458966 s, 228 MB/s 00:03:40.982 12:36:06 -- spdk/autotest.sh@105 -- # sync 00:03:40.982 12:36:06 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:40.982 12:36:06 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:40.982 12:36:06 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:42.922 12:36:08 -- spdk/autotest.sh@111 -- # uname -s 00:03:42.922 12:36:08 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:42.922 12:36:08 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:42.922 12:36:08 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:43.180 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:43.437 
Hugepages 00:03:43.437 node hugesize free / total 00:03:43.437 node0 1048576kB 0 / 0 00:03:43.437 node0 2048kB 0 / 0 00:03:43.437 00:03:43.437 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:43.437 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:43.694 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:43.694 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:43.694 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:03:43.694 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:03:43.694 12:36:09 -- spdk/autotest.sh@117 -- # uname -s 00:03:43.694 12:36:09 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:43.694 12:36:09 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:43.694 12:36:09 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:44.259 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:44.825 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:44.825 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:44.825 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:44.825 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:44.825 12:36:10 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:45.760 12:36:11 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:45.760 12:36:11 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:45.760 12:36:11 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:45.760 12:36:11 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:45.760 12:36:11 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:45.760 12:36:11 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:45.760 12:36:11 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:45.760 12:36:11 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:45.760 12:36:11 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:45.760 12:36:11 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:03:45.760 12:36:11 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:45.760 12:36:11 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:46.018 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:46.276 Waiting for block devices as requested 00:03:46.276 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:46.534 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:46.534 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:03:46.535 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:03:51.798 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:03:51.798 12:36:16 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:51.798 12:36:16 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:51.798 12:36:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:51.798 12:36:16 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:03:51.798 12:36:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:51.798 12:36:16 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:51.799 12:36:17 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:51.799 12:36:17 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:03:51.799 12:36:17 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:03:51.799 12:36:17 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:03:51.799 12:36:17 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:51.799 12:36:17 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:51.799 12:36:17 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:03:51.799 12:36:17 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:51.799 12:36:17 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:51.799 12:36:17 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:51.799 12:36:17 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:51.799 12:36:17 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:51.799 12:36:17 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:51.799 12:36:17 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:51.799 12:36:17 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:51.799 12:36:17 -- common/autotest_common.sh@1543 -- # continue 00:03:51.799 12:36:17 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:51.799 12:36:17 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:51.799 12:36:17 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:51.799 12:36:17 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:03:51.799 12:36:17 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:51.799 12:36:17 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:51.799 12:36:17 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:51.799 12:36:17 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:51.799 12:36:17 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:51.799 12:36:17 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:51.799 12:36:17 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:51.799 12:36:17 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:51.799 12:36:17 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:51.799 12:36:17 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:51.799 12:36:17 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:51.799 12:36:17 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:51.799 12:36:17 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:51.799 12:36:17 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:51.799 12:36:17 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:51.799 12:36:17 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:51.799 12:36:17 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:51.799 12:36:17 -- common/autotest_common.sh@1543 -- # continue 00:03:51.799 12:36:17 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:51.799 12:36:17 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:03:51.799 12:36:17 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:51.799 12:36:17 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:03:51.799 12:36:17 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:03:51.799 12:36:17 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:03:51.799 12:36:17 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:03:51.799 12:36:17 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:03:51.799 12:36:17 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:03:51.799 12:36:17 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:03:51.799 12:36:17 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:51.799 12:36:17 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:03:51.799 12:36:17 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:51.799 12:36:17 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:51.799 12:36:17 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:51.799 12:36:17 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:51.799 12:36:17 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:03:51.799 12:36:17 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:51.799 12:36:17 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:51.799 12:36:17 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:51.799 12:36:17 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:51.799 12:36:17 -- common/autotest_common.sh@1543 -- # continue 00:03:51.799 12:36:17 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:51.799 12:36:17 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:03:51.799 12:36:17 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:51.799 12:36:17 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:03:51.799 12:36:17 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:03:51.799 12:36:17 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:03:51.799 12:36:17 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:03:51.799 12:36:17 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:03:51.799 12:36:17 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:03:51.799 12:36:17 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:03:51.799 12:36:17 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:03:51.799 12:36:17 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:51.799 12:36:17 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:51.799 12:36:17 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:51.799 12:36:17 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:51.799 12:36:17 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:51.799 12:36:17 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:51.799 12:36:17 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:03:51.799 12:36:17 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:51.799 12:36:17 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:51.799 12:36:17 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:03:51.799 12:36:17 -- common/autotest_common.sh@1543 -- # continue 00:03:51.799 12:36:17 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:51.799 12:36:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:51.799 12:36:17 -- common/autotest_common.sh@10 -- # set +x 00:03:51.799 12:36:17 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:51.799 12:36:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:51.799 12:36:17 -- common/autotest_common.sh@10 -- # set +x 00:03:51.799 12:36:17 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:52.057 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:52.622 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:52.622 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:52.622 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:52.622 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:52.880 12:36:18 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:52.880 12:36:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:52.880 12:36:18 -- common/autotest_common.sh@10 -- # set +x 00:03:52.880 12:36:18 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:52.880 12:36:18 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:52.880 12:36:18 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:52.880 12:36:18 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:52.880 12:36:18 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:52.880 12:36:18 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:52.880 12:36:18 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:52.880 12:36:18 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:52.880 12:36:18 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:52.880 12:36:18 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:52.880 12:36:18 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:52.880 12:36:18 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:52.880 12:36:18 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:52.880 12:36:18 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:03:52.880 12:36:18 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:52.880 12:36:18 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:52.880 12:36:18 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:52.880 12:36:18 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:52.880 12:36:18 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:52.880 12:36:18 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:52.880 12:36:18 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:52.880 12:36:18 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:52.880 12:36:18 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:52.880 12:36:18 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:52.880 12:36:18 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:03:52.880 12:36:18 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:52.880 12:36:18 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:03:52.880 12:36:18 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:52.880 12:36:18 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:03:52.880 12:36:18 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:52.880 12:36:18 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:52.880 12:36:18 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:52.880 12:36:18 -- common/autotest_common.sh@1572 -- # return 0 00:03:52.880 12:36:18 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:52.880 12:36:18 -- common/autotest_common.sh@1580 -- # return 0 00:03:52.880 12:36:18 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:52.880 12:36:18 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:52.880 12:36:18 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:52.880 12:36:18 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:52.880 12:36:18 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:52.880 12:36:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:52.880 12:36:18 -- common/autotest_common.sh@10 -- # set +x 00:03:52.880 12:36:18 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:52.880 12:36:18 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:52.880 12:36:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.880 12:36:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.880 12:36:18 -- common/autotest_common.sh@10 -- # set +x 00:03:52.880 ************************************ 00:03:52.880 START TEST env 00:03:52.880 ************************************ 00:03:52.880 12:36:18 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:52.880 * Looking for test storage... 00:03:52.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:52.880 12:36:18 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:52.880 12:36:18 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:52.880 12:36:18 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:53.139 12:36:18 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:53.139 12:36:18 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:53.139 12:36:18 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:53.139 12:36:18 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:53.139 12:36:18 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:53.139 12:36:18 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:53.139 12:36:18 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:53.139 12:36:18 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:53.139 12:36:18 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:53.139 12:36:18 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:53.139 12:36:18 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:53.139 12:36:18 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:53.139 12:36:18 env -- scripts/common.sh@344 -- # case "$op" in 00:03:53.139 12:36:18 env -- scripts/common.sh@345 -- # : 1 00:03:53.139 12:36:18 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:53.139 12:36:18 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:53.139 12:36:18 env -- scripts/common.sh@365 -- # decimal 1 00:03:53.139 12:36:18 env -- scripts/common.sh@353 -- # local d=1 00:03:53.139 12:36:18 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:53.139 12:36:18 env -- scripts/common.sh@355 -- # echo 1 00:03:53.139 12:36:18 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:53.139 12:36:18 env -- scripts/common.sh@366 -- # decimal 2 00:03:53.139 12:36:18 env -- scripts/common.sh@353 -- # local d=2 00:03:53.139 12:36:18 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:53.139 12:36:18 env -- scripts/common.sh@355 -- # echo 2 00:03:53.139 12:36:18 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:53.139 12:36:18 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:53.139 12:36:18 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:53.139 12:36:18 env -- scripts/common.sh@368 -- # return 0 00:03:53.139 12:36:18 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:53.139 12:36:18 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:53.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.139 --rc genhtml_branch_coverage=1 00:03:53.139 --rc genhtml_function_coverage=1 00:03:53.139 --rc genhtml_legend=1 00:03:53.139 --rc geninfo_all_blocks=1 00:03:53.139 --rc geninfo_unexecuted_blocks=1 00:03:53.139 00:03:53.139 ' 00:03:53.139 12:36:18 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:53.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.139 --rc genhtml_branch_coverage=1 00:03:53.139 --rc genhtml_function_coverage=1 00:03:53.139 --rc genhtml_legend=1 00:03:53.139 --rc geninfo_all_blocks=1 00:03:53.139 --rc geninfo_unexecuted_blocks=1 00:03:53.139 00:03:53.139 ' 00:03:53.139 12:36:18 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:53.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.139 --rc genhtml_branch_coverage=1 00:03:53.139 --rc genhtml_function_coverage=1 00:03:53.139 --rc genhtml_legend=1 00:03:53.139 --rc geninfo_all_blocks=1 00:03:53.139 --rc geninfo_unexecuted_blocks=1 00:03:53.140 00:03:53.140 ' 00:03:53.140 12:36:18 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:53.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.140 --rc genhtml_branch_coverage=1 00:03:53.140 --rc genhtml_function_coverage=1 00:03:53.140 --rc genhtml_legend=1 00:03:53.140 --rc geninfo_all_blocks=1 00:03:53.140 --rc geninfo_unexecuted_blocks=1 00:03:53.140 00:03:53.140 ' 00:03:53.140 12:36:18 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:53.140 12:36:18 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:53.140 12:36:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.140 12:36:18 env -- common/autotest_common.sh@10 -- # set +x 00:03:53.140 ************************************ 00:03:53.140 START TEST env_memory 00:03:53.140 ************************************ 00:03:53.140 12:36:18 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:53.140 00:03:53.140 00:03:53.140 CUnit - A unit testing framework for C - Version 2.1-3 00:03:53.140 http://cunit.sourceforge.net/ 00:03:53.140 00:03:53.140 00:03:53.140 Suite: memory 00:03:53.140 Test: alloc and free memory map ...[2024-11-20 12:36:18.482606] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:53.140 passed 00:03:53.140 Test: mem map translation ...[2024-11-20 12:36:18.523374] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:53.140 [2024-11-20 12:36:18.523485] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:53.140 [2024-11-20 12:36:18.523578] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:53.140 [2024-11-20 12:36:18.523604] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:53.140 passed 00:03:53.140 Test: mem map registration ...[2024-11-20 12:36:18.592877] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:53.140 [2024-11-20 12:36:18.592974] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:53.140 passed 00:03:53.398 Test: mem map adjacent registrations ...passed 00:03:53.398 00:03:53.398 Run Summary: Type Total Ran Passed Failed Inactive 00:03:53.398 suites 1 1 n/a 0 0 00:03:53.398 tests 4 4 4 0 0 00:03:53.398 asserts 152 152 152 0 n/a 00:03:53.398 00:03:53.398 Elapsed time = 0.244 seconds 00:03:53.398 00:03:53.398 real 0m0.282s 00:03:53.398 user 0m0.246s 00:03:53.398 sys 0m0.028s 00:03:53.398 12:36:18 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:53.398 12:36:18 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:53.398 ************************************ 00:03:53.398 END TEST env_memory 00:03:53.398 ************************************ 00:03:53.398 12:36:18 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:53.398 12:36:18 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:53.398 12:36:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.398 12:36:18 env -- common/autotest_common.sh@10 -- # set +x 00:03:53.398 ************************************ 00:03:53.398 START TEST env_vtophys 00:03:53.398 ************************************ 00:03:53.398 12:36:18 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:53.398 EAL: lib.eal log level changed from notice to debug 00:03:53.398 EAL: Detected lcore 0 as core 0 on socket 0 00:03:53.398 EAL: Detected lcore 1 as core 0 on socket 0 00:03:53.398 EAL: Detected lcore 2 as core 0 on socket 0 00:03:53.398 EAL: Detected lcore 3 as core 0 on socket 0 00:03:53.398 EAL: Detected lcore 4 as core 0 on socket 0 00:03:53.398 EAL: Detected lcore 5 as core 0 on socket 0 00:03:53.398 EAL: Detected lcore 6 as core 0 on socket 0 00:03:53.398 EAL: Detected lcore 7 as core 0 on socket 0 00:03:53.398 EAL: Detected lcore 8 as core 0 on socket 0 00:03:53.398 EAL: Detected lcore 9 as core 0 on socket 0 00:03:53.398 EAL: Maximum logical cores by configuration: 128 00:03:53.398 EAL: Detected CPU lcores: 10 00:03:53.398 EAL: Detected NUMA nodes: 1 00:03:53.398 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:53.398 EAL: Detected shared linkage of DPDK 00:03:53.398 EAL: No 
shared files mode enabled, IPC will be disabled 00:03:53.398 EAL: Selected IOVA mode 'PA' 00:03:53.398 EAL: Probing VFIO support... 00:03:53.398 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:53.398 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:53.398 EAL: Ask a virtual area of 0x2e000 bytes 00:03:53.398 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:53.398 EAL: Setting up physically contiguous memory... 00:03:53.398 EAL: Setting maximum number of open files to 524288 00:03:53.398 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:53.398 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:53.398 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.398 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:53.398 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.398 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.398 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:53.398 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:53.398 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.398 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:53.398 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.398 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.398 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:53.398 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:53.398 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.398 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:53.398 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.398 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.398 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:53.398 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:53.398 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.398 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:53.398 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.398 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.398 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:53.398 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:53.398 EAL: Hugepages will be freed exactly as allocated. 00:03:53.398 EAL: No shared files mode enabled, IPC is disabled 00:03:53.398 EAL: No shared files mode enabled, IPC is disabled 00:03:53.678 EAL: TSC frequency is ~2600000 KHz 00:03:53.678 EAL: Main lcore 0 is ready (tid=7faff9227a40;cpuset=[0]) 00:03:53.678 EAL: Trying to obtain current memory policy. 00:03:53.678 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.678 EAL: Restoring previous memory policy: 0 00:03:53.678 EAL: request: mp_malloc_sync 00:03:53.678 EAL: No shared files mode enabled, IPC is disabled 00:03:53.678 EAL: Heap on socket 0 was expanded by 2MB 00:03:53.678 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:53.678 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:53.678 EAL: Mem event callback 'spdk:(nil)' registered 00:03:53.678 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:03:53.678 00:03:53.678 00:03:53.678 CUnit - A unit testing framework for C - Version 2.1-3 00:03:53.678 http://cunit.sourceforge.net/ 00:03:53.678 00:03:53.678 00:03:53.678 Suite: components_suite 00:03:53.935 Test: vtophys_malloc_test ...passed 00:03:53.935 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:53.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.935 EAL: Restoring previous memory policy: 4 00:03:53.935 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.935 EAL: request: mp_malloc_sync 00:03:53.935 EAL: No shared files mode enabled, IPC is disabled 00:03:53.935 EAL: Heap on socket 0 was expanded by 4MB 00:03:53.935 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.935 EAL: request: mp_malloc_sync 00:03:53.935 EAL: No shared files mode enabled, IPC is disabled 00:03:53.935 EAL: Heap on socket 0 was shrunk by 4MB 00:03:53.935 EAL: Trying to obtain current memory policy. 00:03:53.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.935 EAL: Restoring previous memory policy: 4 00:03:53.935 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.935 EAL: request: mp_malloc_sync 00:03:53.935 EAL: No shared files mode enabled, IPC is disabled 00:03:53.935 EAL: Heap on socket 0 was expanded by 6MB 00:03:53.935 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.935 EAL: request: mp_malloc_sync 00:03:53.935 EAL: No shared files mode enabled, IPC is disabled 00:03:53.935 EAL: Heap on socket 0 was shrunk by 6MB 00:03:53.935 EAL: Trying to obtain current memory policy. 00:03:53.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.935 EAL: Restoring previous memory policy: 4 00:03:53.935 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.935 EAL: request: mp_malloc_sync 00:03:53.935 EAL: No shared files mode enabled, IPC is disabled 00:03:53.935 EAL: Heap on socket 0 was expanded by 10MB 00:03:53.935 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.935 EAL: request: mp_malloc_sync 00:03:53.935 EAL: No shared files mode enabled, IPC is disabled 00:03:53.935 EAL: Heap on socket 0 was shrunk by 10MB 00:03:53.935 EAL: Trying to obtain current memory policy. 00:03:53.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.935 EAL: Restoring previous memory policy: 4 00:03:53.935 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.935 EAL: request: mp_malloc_sync 00:03:53.935 EAL: No shared files mode enabled, IPC is disabled 00:03:53.935 EAL: Heap on socket 0 was expanded by 18MB 00:03:53.935 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.935 EAL: request: mp_malloc_sync 00:03:53.935 EAL: No shared files mode enabled, IPC is disabled 00:03:53.935 EAL: Heap on socket 0 was shrunk by 18MB 00:03:53.935 EAL: Trying to obtain current memory policy. 00:03:53.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.935 EAL: Restoring previous memory policy: 4 00:03:53.935 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.935 EAL: request: mp_malloc_sync 00:03:53.935 EAL: No shared files mode enabled, IPC is disabled 00:03:53.935 EAL: Heap on socket 0 was expanded by 34MB 00:03:53.935 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.935 EAL: request: mp_malloc_sync 00:03:53.935 EAL: No shared files mode enabled, IPC is disabled 00:03:53.935 EAL: Heap on socket 0 was shrunk by 34MB 00:03:54.193 EAL: Trying to obtain current memory policy. 
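The invalid-parameter errors in the env_memory output above and the 2 MB-granular heap growth in this vtophys run follow from the same rule: SPDK mem maps track translations per 2 MB hugepage, so every vaddr and len passed to the registration and translation APIs must be a 2 MB multiple, and the map only covers the canonical user address range (which is why 281474976710656, i.e. 2^48, is rejected as an invalid usermode address). A minimal C sketch of the aligned pattern, using the public spdk/env.h mem-map API; the callback body, the translation value 0xdeadbeef, and the function names are placeholders, not taken from the test source:

    #include "spdk/env.h"

    #define SZ_2MB (2ULL * 1024 * 1024)

    /* Invoked once per 2 MB region as memory is registered/unregistered. */
    static int
    notify_cb(void *cb_ctx, struct spdk_mem_map *map,
              enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
    {
        return 0;
    }

    static const struct spdk_mem_map_ops ops = { .notify_cb = notify_cb };

    /* buf is assumed to be a 2 MB-aligned, 2 MB-long region. */
    static void
    mem_map_sketch(void *buf)
    {
        struct spdk_mem_map *map;

        /* vaddr and len must both be 2 MB multiples; len=1234 or
         * vaddr=0x4d2 fail exactly like the *ERROR* lines above. */
        spdk_mem_register(buf, SZ_2MB);

        map = spdk_mem_map_alloc(0, &ops, NULL);
        /* Placeholder translation for a 2 MB-aligned range. */
        spdk_mem_map_set_translation(map, (uint64_t)(uintptr_t)buf,
                                     SZ_2MB, 0xdeadbeef);
        spdk_mem_map_free(&map);

        spdk_mem_unregister(buf, SZ_2MB);
    }
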
00:03:54.193 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.193 EAL: Restoring previous memory policy: 4 00:03:54.193 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.193 EAL: request: mp_malloc_sync 00:03:54.193 EAL: No shared files mode enabled, IPC is disabled 00:03:54.193 EAL: Heap on socket 0 was expanded by 66MB 00:03:54.193 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.193 EAL: request: mp_malloc_sync 00:03:54.193 EAL: No shared files mode enabled, IPC is disabled 00:03:54.193 EAL: Heap on socket 0 was shrunk by 66MB 00:03:54.193 EAL: Trying to obtain current memory policy. 00:03:54.193 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.193 EAL: Restoring previous memory policy: 4 00:03:54.193 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.194 EAL: request: mp_malloc_sync 00:03:54.194 EAL: No shared files mode enabled, IPC is disabled 00:03:54.194 EAL: Heap on socket 0 was expanded by 130MB 00:03:54.452 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.452 EAL: request: mp_malloc_sync 00:03:54.452 EAL: No shared files mode enabled, IPC is disabled 00:03:54.452 EAL: Heap on socket 0 was shrunk by 130MB 00:03:54.452 EAL: Trying to obtain current memory policy. 00:03:54.452 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.710 EAL: Restoring previous memory policy: 4 00:03:54.710 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.710 EAL: request: mp_malloc_sync 00:03:54.710 EAL: No shared files mode enabled, IPC is disabled 00:03:54.710 EAL: Heap on socket 0 was expanded by 258MB 00:03:54.968 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.968 EAL: request: mp_malloc_sync 00:03:54.968 EAL: No shared files mode enabled, IPC is disabled 00:03:54.968 EAL: Heap on socket 0 was shrunk by 258MB 00:03:55.227 EAL: Trying to obtain current memory policy. 00:03:55.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.227 EAL: Restoring previous memory policy: 4 00:03:55.227 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.227 EAL: request: mp_malloc_sync 00:03:55.227 EAL: No shared files mode enabled, IPC is disabled 00:03:55.227 EAL: Heap on socket 0 was expanded by 514MB 00:03:55.794 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.051 EAL: request: mp_malloc_sync 00:03:56.051 EAL: No shared files mode enabled, IPC is disabled 00:03:56.051 EAL: Heap on socket 0 was shrunk by 514MB 00:03:56.309 EAL: Trying to obtain current memory policy. 
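Each "expanded by N MB" / "shrunk by N MB" pair above is one allocation round in vtophys_spdk_malloc_test: the DPDK heap grows in hugepage multiples to back a DMA-safe buffer and, since hugepages are freed exactly as allocated, shrinks again on free. A hedged sketch of that pattern; the function name and alignment choice are illustrative and this is not the test source:

    #include "spdk/env.h"

    static void
    alloc_round(size_t size)
    {
        uint64_t len = size;

        /* Hugepage-backed DMA-safe allocation; triggers the
         * "Heap on socket 0 was expanded by ..." callbacks above. */
        void *buf = spdk_dma_zmalloc(size, 0x200000 /* 2 MB align */, NULL);
        if (buf == NULL) {
            return;
        }

        /* Virtual-to-physical lookup for the mapped region;
         * returns SPDK_VTOPHYS_ERROR if the buffer is unregistered. */
        if (spdk_vtophys(buf, &len) == SPDK_VTOPHYS_ERROR) {
            /* handle translation failure */
        }

        /* Releases the hugepages, producing the matching "shrunk by ..." line. */
        spdk_dma_free(buf);
    }
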
00:03:56.309 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.568 EAL: Restoring previous memory policy: 4 00:03:56.568 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.568 EAL: request: mp_malloc_sync 00:03:56.568 EAL: No shared files mode enabled, IPC is disabled 00:03:56.568 EAL: Heap on socket 0 was expanded by 1026MB 00:03:57.500 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.758 EAL: request: mp_malloc_sync 00:03:57.758 EAL: No shared files mode enabled, IPC is disabled 00:03:57.758 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:58.689 passed 00:03:58.689 00:03:58.689 Run Summary: Type Total Ran Passed Failed Inactive 00:03:58.689 suites 1 1 n/a 0 0 00:03:58.689 tests 2 2 2 0 0 00:03:58.689 asserts 5698 5698 5698 0 n/a 00:03:58.689 00:03:58.689 Elapsed time = 4.862 seconds 00:03:58.689 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.689 EAL: request: mp_malloc_sync 00:03:58.689 EAL: No shared files mode enabled, IPC is disabled 00:03:58.689 EAL: Heap on socket 0 was shrunk by 2MB 00:03:58.689 EAL: No shared files mode enabled, IPC is disabled 00:03:58.689 EAL: No shared files mode enabled, IPC is disabled 00:03:58.689 EAL: No shared files mode enabled, IPC is disabled 00:03:58.689 00:03:58.689 real 0m5.130s 00:03:58.689 user 0m4.216s 00:03:58.689 sys 0m0.765s 00:03:58.689 12:36:23 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.689 ************************************ 00:03:58.689 END TEST env_vtophys 00:03:58.689 ************************************ 00:03:58.689 12:36:23 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:58.689 12:36:23 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:58.689 12:36:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.689 12:36:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.689 12:36:23 env -- common/autotest_common.sh@10 -- # set +x 00:03:58.689 ************************************ 00:03:58.689 START TEST env_pci 00:03:58.689 ************************************ 00:03:58.689 12:36:23 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:58.689 00:03:58.689 00:03:58.689 CUnit - A unit testing framework for C - Version 2.1-3 00:03:58.689 http://cunit.sourceforge.net/ 00:03:58.689 00:03:58.689 00:03:58.689 Suite: pci 00:03:58.689 Test: pci_hook ...[2024-11-20 12:36:23.952557] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57010 has claimed it 00:03:58.689 passed 00:03:58.689 00:03:58.689 Run Summary: Type Total Ran Passed Failed Inactive 00:03:58.689 suites 1 1 n/a 0 0 00:03:58.689 tests 1 1 1 0 0 00:03:58.689 asserts 25 25 25 0 n/a 00:03:58.689 00:03:58.689 Elapsed time = 0.009 seconds 00:03:58.689 EAL: Cannot find device (10000:00:01.0) 00:03:58.689 EAL: Failed to attach device on primary process 00:03:58.689 00:03:58.689 real 0m0.069s 00:03:58.689 user 0m0.030s 00:03:58.689 sys 0m0.037s 00:03:58.689 12:36:23 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.689 ************************************ 00:03:58.689 12:36:23 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:58.689 END TEST env_pci 00:03:58.689 ************************************ 00:03:58.689 12:36:24 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:58.689 12:36:24 env -- env/env.sh@15 -- # uname 00:03:58.689 12:36:24 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:58.689 12:36:24 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:58.689 12:36:24 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:58.689 12:36:24 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:58.689 12:36:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.689 12:36:24 env -- common/autotest_common.sh@10 -- # set +x 00:03:58.689 ************************************ 00:03:58.689 START TEST env_dpdk_post_init 00:03:58.689 ************************************ 00:03:58.689 12:36:24 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:58.689 EAL: Detected CPU lcores: 10 00:03:58.689 EAL: Detected NUMA nodes: 1 00:03:58.689 EAL: Detected shared linkage of DPDK 00:03:58.689 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:58.689 EAL: Selected IOVA mode 'PA' 00:03:58.689 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:58.946 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:03:58.946 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:03:58.946 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:03:58.946 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:03:58.946 Starting DPDK initialization... 00:03:58.946 Starting SPDK post initialization... 00:03:58.946 SPDK NVMe probe 00:03:58.946 Attaching to 0000:00:10.0 00:03:58.946 Attaching to 0000:00:11.0 00:03:58.946 Attaching to 0000:00:12.0 00:03:58.946 Attaching to 0000:00:13.0 00:03:58.946 Attached to 0000:00:10.0 00:03:58.946 Attached to 0000:00:11.0 00:03:58.946 Attached to 0000:00:13.0 00:03:58.946 Attached to 0000:00:12.0 00:03:58.946 Cleaning up... 
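The four Attaching/Attached pairs above are the standard probe-and-attach flow: env_dpdk_post_init brings up the SPDK environment with the -c 0x1 core mask and --base-virtaddr from its command line, then enumerates the local PCIe bus and attaches each NVMe controller (the 1b36:0010 devices). A hedged sketch of that sequence using the public env and nvme headers; the app name and callback bodies are placeholders:

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        return true; /* attach to every controller found, as the test does */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        /* One "Attached to <bdf>" per controller, as in the log above. */
    }

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "post_init_sketch";         /* placeholder name */
        opts.core_mask = "0x1";                 /* matches -c 0x1 above */
        opts.base_virtaddr = 0x200000000000ULL; /* matches --base-virtaddr */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* NULL trid: enumerate the local PCIe bus. */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }
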
00:03:58.946 00:03:58.946 real 0m0.233s 00:03:58.946 user 0m0.072s 00:03:58.946 sys 0m0.062s 00:03:58.946 12:36:24 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.946 ************************************ 00:03:58.946 END TEST env_dpdk_post_init 00:03:58.946 ************************************ 00:03:58.946 12:36:24 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:58.946 12:36:24 env -- env/env.sh@26 -- # uname 00:03:58.946 12:36:24 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:58.946 12:36:24 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:58.946 12:36:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.946 12:36:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.946 12:36:24 env -- common/autotest_common.sh@10 -- # set +x 00:03:58.946 ************************************ 00:03:58.946 START TEST env_mem_callbacks 00:03:58.946 ************************************ 00:03:58.946 12:36:24 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:58.946 EAL: Detected CPU lcores: 10 00:03:58.946 EAL: Detected NUMA nodes: 1 00:03:58.946 EAL: Detected shared linkage of DPDK 00:03:58.946 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:58.946 EAL: Selected IOVA mode 'PA' 00:03:59.202 00:03:59.202 00:03:59.202 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.202 http://cunit.sourceforge.net/ 00:03:59.203 00:03:59.203 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:59.203 00:03:59.203 Suite: memory 00:03:59.203 Test: test ... 00:03:59.203 register 0x200000200000 2097152 00:03:59.203 malloc 3145728 00:03:59.203 register 0x200000400000 4194304 00:03:59.203 buf 0x2000004fffc0 len 3145728 PASSED 00:03:59.203 malloc 64 00:03:59.203 buf 0x2000004ffec0 len 64 PASSED 00:03:59.203 malloc 4194304 00:03:59.203 register 0x200000800000 6291456 00:03:59.203 buf 0x2000009fffc0 len 4194304 PASSED 00:03:59.203 free 0x2000004fffc0 3145728 00:03:59.203 free 0x2000004ffec0 64 00:03:59.203 unregister 0x200000400000 4194304 PASSED 00:03:59.203 free 0x2000009fffc0 4194304 00:03:59.203 unregister 0x200000800000 6291456 PASSED 00:03:59.203 malloc 8388608 00:03:59.203 register 0x200000400000 10485760 00:03:59.203 buf 0x2000005fffc0 len 8388608 PASSED 00:03:59.203 free 0x2000005fffc0 8388608 00:03:59.203 unregister 0x200000400000 10485760 PASSED 00:03:59.203 passed 00:03:59.203 00:03:59.203 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.203 suites 1 1 n/a 0 0 00:03:59.203 tests 1 1 1 0 0 00:03:59.203 asserts 15 15 15 0 n/a 00:03:59.203 00:03:59.203 Elapsed time = 0.039 seconds 00:03:59.203 00:03:59.203 real 0m0.208s 00:03:59.203 user 0m0.059s 00:03:59.203 sys 0m0.048s 00:03:59.203 12:36:24 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.203 ************************************ 00:03:59.203 END TEST env_mem_callbacks 00:03:59.203 ************************************ 00:03:59.203 12:36:24 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:59.203 00:03:59.203 real 0m6.282s 00:03:59.203 user 0m4.786s 00:03:59.203 sys 0m1.135s 00:03:59.203 12:36:24 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.203 12:36:24 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.203 ************************************ 00:03:59.203 END TEST env 00:03:59.203 
************************************ 00:03:59.203 12:36:24 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:59.203 12:36:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.203 12:36:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.203 12:36:24 -- common/autotest_common.sh@10 -- # set +x 00:03:59.203 ************************************ 00:03:59.203 START TEST rpc 00:03:59.203 ************************************ 00:03:59.203 12:36:24 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:59.203 * Looking for test storage... 00:03:59.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:59.203 12:36:24 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:59.203 12:36:24 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:59.203 12:36:24 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:59.460 12:36:24 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:59.460 12:36:24 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:59.460 12:36:24 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:59.460 12:36:24 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:59.460 12:36:24 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:59.460 12:36:24 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:59.460 12:36:24 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:59.460 12:36:24 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:59.460 12:36:24 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:59.460 12:36:24 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:59.460 12:36:24 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:59.460 12:36:24 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:59.460 12:36:24 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:59.460 12:36:24 rpc -- scripts/common.sh@345 -- # : 1 00:03:59.460 12:36:24 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:59.460 12:36:24 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:59.460 12:36:24 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:59.460 12:36:24 rpc -- scripts/common.sh@353 -- # local d=1 00:03:59.460 12:36:24 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:59.460 12:36:24 rpc -- scripts/common.sh@355 -- # echo 1 00:03:59.460 12:36:24 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:59.460 12:36:24 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:59.460 12:36:24 rpc -- scripts/common.sh@353 -- # local d=2 00:03:59.460 12:36:24 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.460 12:36:24 rpc -- scripts/common.sh@355 -- # echo 2 00:03:59.460 12:36:24 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:59.460 12:36:24 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:59.460 12:36:24 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:59.460 12:36:24 rpc -- scripts/common.sh@368 -- # return 0 00:03:59.460 12:36:24 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.460 12:36:24 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:59.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.460 --rc genhtml_branch_coverage=1 00:03:59.460 --rc genhtml_function_coverage=1 00:03:59.460 --rc genhtml_legend=1 00:03:59.460 --rc geninfo_all_blocks=1 00:03:59.460 --rc geninfo_unexecuted_blocks=1 00:03:59.460 00:03:59.460 ' 00:03:59.460 12:36:24 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:59.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.460 --rc genhtml_branch_coverage=1 00:03:59.460 --rc genhtml_function_coverage=1 00:03:59.460 --rc genhtml_legend=1 00:03:59.460 --rc geninfo_all_blocks=1 00:03:59.460 --rc geninfo_unexecuted_blocks=1 00:03:59.460 00:03:59.460 ' 00:03:59.460 12:36:24 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:59.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.460 --rc genhtml_branch_coverage=1 00:03:59.461 --rc genhtml_function_coverage=1 00:03:59.461 --rc genhtml_legend=1 00:03:59.461 --rc geninfo_all_blocks=1 00:03:59.461 --rc geninfo_unexecuted_blocks=1 00:03:59.461 00:03:59.461 ' 00:03:59.461 12:36:24 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:59.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.461 --rc genhtml_branch_coverage=1 00:03:59.461 --rc genhtml_function_coverage=1 00:03:59.461 --rc genhtml_legend=1 00:03:59.461 --rc geninfo_all_blocks=1 00:03:59.461 --rc geninfo_unexecuted_blocks=1 00:03:59.461 00:03:59.461 ' 00:03:59.461 12:36:24 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57137 00:03:59.461 12:36:24 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:59.461 12:36:24 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57137 00:03:59.461 12:36:24 rpc -- common/autotest_common.sh@835 -- # '[' -z 57137 ']' 00:03:59.461 12:36:24 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:59.461 12:36:24 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:59.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:59.461 12:36:24 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:03:59.461 12:36:24 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:59.461 12:36:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.461 12:36:24 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:59.461 [2024-11-20 12:36:24.810810] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:03:59.461 [2024-11-20 12:36:24.810930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57137 ] 00:03:59.461 [2024-11-20 12:36:24.966592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.718 [2024-11-20 12:36:25.064108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:59.718 [2024-11-20 12:36:25.064163] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57137' to capture a snapshot of events at runtime. 00:03:59.718 [2024-11-20 12:36:25.064172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:59.718 [2024-11-20 12:36:25.064181] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:59.718 [2024-11-20 12:36:25.064187] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57137 for offline analysis/debug. 00:03:59.718 [2024-11-20 12:36:25.064926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.283 12:36:25 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:00.283 12:36:25 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:00.283 12:36:25 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:00.283 12:36:25 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:00.283 12:36:25 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:00.283 12:36:25 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:00.283 12:36:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.283 12:36:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.283 12:36:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.283 ************************************ 00:04:00.283 START TEST rpc_integrity 00:04:00.283 ************************************ 00:04:00.283 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:00.283 12:36:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:00.283 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.283 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.283 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.283 12:36:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:00.283 12:36:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:00.283 12:36:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:00.283 12:36:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
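The rpc_cmd helper above drives spdk_tgt over JSON-RPC on the UNIX socket it waited for (/var/tmp/spdk.sock); bdev_malloc_create 8 512 asks for an 8 MB malloc bdev with 512-byte blocks, which is exactly the num_blocks 16384, block_size 512 bdev dumped below. A minimal C sketch of reaching that socket with the client API from spdk/jsonrpc.h; building and sending the actual request is omitted, and only the connect and close calls are used here:

    #include <sys/socket.h> /* AF_UNIX */
    #include "spdk/jsonrpc.h"

    int
    check_rpc_socket(void)
    {
        /* rpc_cmd/waitforlisten talk JSON-RPC over this UNIX socket. */
        struct spdk_jsonrpc_client *client =
            spdk_jsonrpc_client_connect("/var/tmp/spdk.sock", AF_UNIX);

        if (client == NULL) {
            return -1; /* spdk_tgt not listening yet */
        }
        spdk_jsonrpc_client_close(client);
        return 0;
    }
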
00:04:00.283 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.283 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.283 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.283 12:36:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:00.283 12:36:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:00.283 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.283 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.283 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.283 12:36:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:00.283 { 00:04:00.283 "name": "Malloc0", 00:04:00.283 "aliases": [ 00:04:00.283 "289802eb-1f34-4dc6-b237-c5354d9d0aa5" 00:04:00.283 ], 00:04:00.283 "product_name": "Malloc disk", 00:04:00.283 "block_size": 512, 00:04:00.283 "num_blocks": 16384, 00:04:00.283 "uuid": "289802eb-1f34-4dc6-b237-c5354d9d0aa5", 00:04:00.283 "assigned_rate_limits": { 00:04:00.283 "rw_ios_per_sec": 0, 00:04:00.283 "rw_mbytes_per_sec": 0, 00:04:00.283 "r_mbytes_per_sec": 0, 00:04:00.283 "w_mbytes_per_sec": 0 00:04:00.283 }, 00:04:00.283 "claimed": false, 00:04:00.283 "zoned": false, 00:04:00.283 "supported_io_types": { 00:04:00.283 "read": true, 00:04:00.283 "write": true, 00:04:00.284 "unmap": true, 00:04:00.284 "flush": true, 00:04:00.284 "reset": true, 00:04:00.284 "nvme_admin": false, 00:04:00.284 "nvme_io": false, 00:04:00.284 "nvme_io_md": false, 00:04:00.284 "write_zeroes": true, 00:04:00.284 "zcopy": true, 00:04:00.284 "get_zone_info": false, 00:04:00.284 "zone_management": false, 00:04:00.284 "zone_append": false, 00:04:00.284 "compare": false, 00:04:00.284 "compare_and_write": false, 00:04:00.284 "abort": true, 00:04:00.284 "seek_hole": false, 00:04:00.284 "seek_data": false, 00:04:00.284 "copy": true, 00:04:00.284 "nvme_iov_md": false 00:04:00.284 }, 00:04:00.284 "memory_domains": [ 00:04:00.284 { 00:04:00.284 "dma_device_id": "system", 00:04:00.284 "dma_device_type": 1 00:04:00.284 }, 00:04:00.284 { 00:04:00.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.284 "dma_device_type": 2 00:04:00.284 } 00:04:00.284 ], 00:04:00.284 "driver_specific": {} 00:04:00.284 } 00:04:00.284 ]' 00:04:00.284 12:36:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:00.284 12:36:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:00.284 12:36:25 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:00.284 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.284 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.284 [2024-11-20 12:36:25.767536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:00.284 [2024-11-20 12:36:25.767596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:00.284 [2024-11-20 12:36:25.767623] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:00.284 [2024-11-20 12:36:25.767634] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:00.284 [2024-11-20 12:36:25.769583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:00.284 [2024-11-20 12:36:25.769620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:00.284 
Passthru0 00:04:00.284 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.284 12:36:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:00.284 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.284 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.284 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.284 12:36:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:00.284 { 00:04:00.284 "name": "Malloc0", 00:04:00.284 "aliases": [ 00:04:00.284 "289802eb-1f34-4dc6-b237-c5354d9d0aa5" 00:04:00.284 ], 00:04:00.284 "product_name": "Malloc disk", 00:04:00.284 "block_size": 512, 00:04:00.284 "num_blocks": 16384, 00:04:00.284 "uuid": "289802eb-1f34-4dc6-b237-c5354d9d0aa5", 00:04:00.284 "assigned_rate_limits": { 00:04:00.284 "rw_ios_per_sec": 0, 00:04:00.284 "rw_mbytes_per_sec": 0, 00:04:00.284 "r_mbytes_per_sec": 0, 00:04:00.284 "w_mbytes_per_sec": 0 00:04:00.284 }, 00:04:00.284 "claimed": true, 00:04:00.284 "claim_type": "exclusive_write", 00:04:00.284 "zoned": false, 00:04:00.284 "supported_io_types": { 00:04:00.284 "read": true, 00:04:00.284 "write": true, 00:04:00.284 "unmap": true, 00:04:00.284 "flush": true, 00:04:00.284 "reset": true, 00:04:00.284 "nvme_admin": false, 00:04:00.284 "nvme_io": false, 00:04:00.284 "nvme_io_md": false, 00:04:00.284 "write_zeroes": true, 00:04:00.284 "zcopy": true, 00:04:00.284 "get_zone_info": false, 00:04:00.284 "zone_management": false, 00:04:00.284 "zone_append": false, 00:04:00.284 "compare": false, 00:04:00.284 "compare_and_write": false, 00:04:00.284 "abort": true, 00:04:00.284 "seek_hole": false, 00:04:00.284 "seek_data": false, 00:04:00.284 "copy": true, 00:04:00.284 "nvme_iov_md": false 00:04:00.284 }, 00:04:00.284 "memory_domains": [ 00:04:00.284 { 00:04:00.284 "dma_device_id": "system", 00:04:00.284 "dma_device_type": 1 00:04:00.284 }, 00:04:00.284 { 00:04:00.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.284 "dma_device_type": 2 00:04:00.284 } 00:04:00.284 ], 00:04:00.284 "driver_specific": {} 00:04:00.284 }, 00:04:00.284 { 00:04:00.284 "name": "Passthru0", 00:04:00.284 "aliases": [ 00:04:00.284 "941c9fe5-1791-557a-80b1-8030d1167cf0" 00:04:00.284 ], 00:04:00.284 "product_name": "passthru", 00:04:00.284 "block_size": 512, 00:04:00.284 "num_blocks": 16384, 00:04:00.284 "uuid": "941c9fe5-1791-557a-80b1-8030d1167cf0", 00:04:00.284 "assigned_rate_limits": { 00:04:00.284 "rw_ios_per_sec": 0, 00:04:00.284 "rw_mbytes_per_sec": 0, 00:04:00.284 "r_mbytes_per_sec": 0, 00:04:00.284 "w_mbytes_per_sec": 0 00:04:00.284 }, 00:04:00.284 "claimed": false, 00:04:00.284 "zoned": false, 00:04:00.284 "supported_io_types": { 00:04:00.284 "read": true, 00:04:00.284 "write": true, 00:04:00.284 "unmap": true, 00:04:00.284 "flush": true, 00:04:00.284 "reset": true, 00:04:00.284 "nvme_admin": false, 00:04:00.284 "nvme_io": false, 00:04:00.284 "nvme_io_md": false, 00:04:00.284 "write_zeroes": true, 00:04:00.284 "zcopy": true, 00:04:00.284 "get_zone_info": false, 00:04:00.284 "zone_management": false, 00:04:00.284 "zone_append": false, 00:04:00.284 "compare": false, 00:04:00.284 "compare_and_write": false, 00:04:00.284 "abort": true, 00:04:00.284 "seek_hole": false, 00:04:00.284 "seek_data": false, 00:04:00.284 "copy": true, 00:04:00.284 "nvme_iov_md": false 00:04:00.284 }, 00:04:00.284 "memory_domains": [ 00:04:00.284 { 00:04:00.284 "dma_device_id": "system", 00:04:00.284 "dma_device_type": 1 00:04:00.284 }, 
00:04:00.284 { 00:04:00.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.284 "dma_device_type": 2 00:04:00.284 } 00:04:00.284 ], 00:04:00.284 "driver_specific": { 00:04:00.284 "passthru": { 00:04:00.284 "name": "Passthru0", 00:04:00.284 "base_bdev_name": "Malloc0" 00:04:00.284 } 00:04:00.284 } 00:04:00.284 } 00:04:00.284 ]' 00:04:00.284 12:36:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:00.542 12:36:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:00.542 12:36:25 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:00.542 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.542 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.542 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.542 12:36:25 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:00.542 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.542 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.542 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.542 12:36:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:00.542 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.542 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.542 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.542 12:36:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:00.542 12:36:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:00.542 12:36:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:00.542 00:04:00.542 real 0m0.239s 00:04:00.542 user 0m0.128s 00:04:00.542 sys 0m0.035s 00:04:00.542 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.542 ************************************ 00:04:00.542 END TEST rpc_integrity 00:04:00.542 ************************************ 00:04:00.542 12:36:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.542 12:36:25 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:00.542 12:36:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.542 12:36:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.542 12:36:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.542 ************************************ 00:04:00.542 START TEST rpc_plugins 00:04:00.542 ************************************ 00:04:00.542 12:36:25 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:00.542 12:36:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:00.542 12:36:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.542 12:36:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.542 12:36:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.542 12:36:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:00.542 12:36:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:00.542 12:36:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.542 12:36:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.542 12:36:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.542 12:36:25 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:00.542 { 00:04:00.542 "name": "Malloc1", 00:04:00.542 "aliases": [ 00:04:00.542 "8cd91e3c-0886-480a-b321-ebd3f598ce03" 00:04:00.542 ], 00:04:00.542 "product_name": "Malloc disk", 00:04:00.542 "block_size": 4096, 00:04:00.542 "num_blocks": 256, 00:04:00.542 "uuid": "8cd91e3c-0886-480a-b321-ebd3f598ce03", 00:04:00.542 "assigned_rate_limits": { 00:04:00.542 "rw_ios_per_sec": 0, 00:04:00.542 "rw_mbytes_per_sec": 0, 00:04:00.542 "r_mbytes_per_sec": 0, 00:04:00.542 "w_mbytes_per_sec": 0 00:04:00.542 }, 00:04:00.542 "claimed": false, 00:04:00.542 "zoned": false, 00:04:00.542 "supported_io_types": { 00:04:00.542 "read": true, 00:04:00.542 "write": true, 00:04:00.542 "unmap": true, 00:04:00.542 "flush": true, 00:04:00.542 "reset": true, 00:04:00.542 "nvme_admin": false, 00:04:00.542 "nvme_io": false, 00:04:00.542 "nvme_io_md": false, 00:04:00.542 "write_zeroes": true, 00:04:00.542 "zcopy": true, 00:04:00.542 "get_zone_info": false, 00:04:00.542 "zone_management": false, 00:04:00.542 "zone_append": false, 00:04:00.542 "compare": false, 00:04:00.542 "compare_and_write": false, 00:04:00.542 "abort": true, 00:04:00.542 "seek_hole": false, 00:04:00.542 "seek_data": false, 00:04:00.542 "copy": true, 00:04:00.542 "nvme_iov_md": false 00:04:00.542 }, 00:04:00.542 "memory_domains": [ 00:04:00.542 { 00:04:00.542 "dma_device_id": "system", 00:04:00.542 "dma_device_type": 1 00:04:00.542 }, 00:04:00.542 { 00:04:00.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.542 "dma_device_type": 2 00:04:00.542 } 00:04:00.542 ], 00:04:00.542 "driver_specific": {} 00:04:00.542 } 00:04:00.542 ]' 00:04:00.542 12:36:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:00.542 12:36:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:00.542 12:36:26 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:00.542 12:36:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.542 12:36:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.542 12:36:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.542 12:36:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:00.542 12:36:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.542 12:36:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.542 12:36:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.542 12:36:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:00.542 12:36:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:00.542 12:36:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:00.542 00:04:00.542 real 0m0.120s 00:04:00.542 user 0m0.066s 00:04:00.542 sys 0m0.019s 00:04:00.542 12:36:26 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.542 12:36:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.542 ************************************ 00:04:00.542 END TEST rpc_plugins 00:04:00.542 ************************************ 00:04:00.801 12:36:26 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:00.801 12:36:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.801 12:36:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.801 12:36:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.801 ************************************ 00:04:00.801 START TEST rpc_trace_cmd_test 
00:04:00.801 ************************************ 00:04:00.801 12:36:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:00.801 12:36:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:00.801 12:36:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:00.801 12:36:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.801 12:36:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:00.801 12:36:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.801 12:36:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:00.801 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57137", 00:04:00.801 "tpoint_group_mask": "0x8", 00:04:00.801 "iscsi_conn": { 00:04:00.801 "mask": "0x2", 00:04:00.801 "tpoint_mask": "0x0" 00:04:00.801 }, 00:04:00.801 "scsi": { 00:04:00.801 "mask": "0x4", 00:04:00.801 "tpoint_mask": "0x0" 00:04:00.801 }, 00:04:00.801 "bdev": { 00:04:00.801 "mask": "0x8", 00:04:00.801 "tpoint_mask": "0xffffffffffffffff" 00:04:00.801 }, 00:04:00.801 "nvmf_rdma": { 00:04:00.801 "mask": "0x10", 00:04:00.801 "tpoint_mask": "0x0" 00:04:00.801 }, 00:04:00.801 "nvmf_tcp": { 00:04:00.801 "mask": "0x20", 00:04:00.801 "tpoint_mask": "0x0" 00:04:00.801 }, 00:04:00.801 "ftl": { 00:04:00.801 "mask": "0x40", 00:04:00.801 "tpoint_mask": "0x0" 00:04:00.801 }, 00:04:00.801 "blobfs": { 00:04:00.801 "mask": "0x80", 00:04:00.801 "tpoint_mask": "0x0" 00:04:00.801 }, 00:04:00.801 "dsa": { 00:04:00.801 "mask": "0x200", 00:04:00.801 "tpoint_mask": "0x0" 00:04:00.801 }, 00:04:00.801 "thread": { 00:04:00.801 "mask": "0x400", 00:04:00.801 "tpoint_mask": "0x0" 00:04:00.801 }, 00:04:00.801 "nvme_pcie": { 00:04:00.801 "mask": "0x800", 00:04:00.801 "tpoint_mask": "0x0" 00:04:00.801 }, 00:04:00.801 "iaa": { 00:04:00.801 "mask": "0x1000", 00:04:00.801 "tpoint_mask": "0x0" 00:04:00.801 }, 00:04:00.801 "nvme_tcp": { 00:04:00.801 "mask": "0x2000", 00:04:00.801 "tpoint_mask": "0x0" 00:04:00.801 }, 00:04:00.801 "bdev_nvme": { 00:04:00.801 "mask": "0x4000", 00:04:00.801 "tpoint_mask": "0x0" 00:04:00.801 }, 00:04:00.801 "sock": { 00:04:00.801 "mask": "0x8000", 00:04:00.801 "tpoint_mask": "0x0" 00:04:00.801 }, 00:04:00.801 "blob": { 00:04:00.801 "mask": "0x10000", 00:04:00.801 "tpoint_mask": "0x0" 00:04:00.801 }, 00:04:00.801 "bdev_raid": { 00:04:00.801 "mask": "0x20000", 00:04:00.801 "tpoint_mask": "0x0" 00:04:00.801 }, 00:04:00.801 "scheduler": { 00:04:00.801 "mask": "0x40000", 00:04:00.801 "tpoint_mask": "0x0" 00:04:00.801 } 00:04:00.801 }' 00:04:00.801 12:36:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:00.801 12:36:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:00.801 12:36:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:00.801 12:36:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:00.801 12:36:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:00.801 12:36:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:00.801 12:36:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:00.801 12:36:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:00.801 12:36:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:00.801 12:36:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:00.801 00:04:00.801 real 0m0.169s 00:04:00.801 
user 0m0.144s 00:04:00.801 sys 0m0.018s 00:04:00.801 12:36:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.801 12:36:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:00.801 ************************************ 00:04:00.801 END TEST rpc_trace_cmd_test 00:04:00.801 ************************************ 00:04:00.801 12:36:26 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:00.801 12:36:26 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:00.801 12:36:26 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:00.801 12:36:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.801 12:36:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.801 12:36:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.801 ************************************ 00:04:00.801 START TEST rpc_daemon_integrity 00:04:00.801 ************************************ 00:04:00.801 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:00.801 12:36:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:00.801 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.801 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.801 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.801 12:36:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:00.801 12:36:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:01.060 12:36:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:01.060 12:36:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:01.060 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.060 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.060 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.060 12:36:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:01.060 12:36:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:01.060 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.060 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.060 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.060 12:36:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:01.060 { 00:04:01.060 "name": "Malloc2", 00:04:01.060 "aliases": [ 00:04:01.060 "3c7f6097-bd9b-4c46-b2d6-ca4fb5ce3d24" 00:04:01.060 ], 00:04:01.060 "product_name": "Malloc disk", 00:04:01.060 "block_size": 512, 00:04:01.060 "num_blocks": 16384, 00:04:01.060 "uuid": "3c7f6097-bd9b-4c46-b2d6-ca4fb5ce3d24", 00:04:01.060 "assigned_rate_limits": { 00:04:01.060 "rw_ios_per_sec": 0, 00:04:01.060 "rw_mbytes_per_sec": 0, 00:04:01.060 "r_mbytes_per_sec": 0, 00:04:01.060 "w_mbytes_per_sec": 0 00:04:01.060 }, 00:04:01.060 "claimed": false, 00:04:01.060 "zoned": false, 00:04:01.060 "supported_io_types": { 00:04:01.060 "read": true, 00:04:01.060 "write": true, 00:04:01.060 "unmap": true, 00:04:01.060 "flush": true, 00:04:01.060 "reset": true, 00:04:01.060 "nvme_admin": false, 00:04:01.060 "nvme_io": false, 00:04:01.060 "nvme_io_md": false, 00:04:01.060 "write_zeroes": true, 00:04:01.060 "zcopy": true, 00:04:01.060 "get_zone_info": 
false, 00:04:01.060 "zone_management": false, 00:04:01.060 "zone_append": false, 00:04:01.060 "compare": false, 00:04:01.060 "compare_and_write": false, 00:04:01.060 "abort": true, 00:04:01.060 "seek_hole": false, 00:04:01.060 "seek_data": false, 00:04:01.060 "copy": true, 00:04:01.060 "nvme_iov_md": false 00:04:01.060 }, 00:04:01.060 "memory_domains": [ 00:04:01.060 { 00:04:01.060 "dma_device_id": "system", 00:04:01.060 "dma_device_type": 1 00:04:01.060 }, 00:04:01.060 { 00:04:01.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.060 "dma_device_type": 2 00:04:01.060 } 00:04:01.060 ], 00:04:01.060 "driver_specific": {} 00:04:01.060 } 00:04:01.060 ]' 00:04:01.060 12:36:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:01.060 12:36:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:01.060 12:36:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:01.060 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.060 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.060 [2024-11-20 12:36:26.413373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:01.060 [2024-11-20 12:36:26.413426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:01.060 [2024-11-20 12:36:26.413445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:01.060 [2024-11-20 12:36:26.413455] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:01.060 [2024-11-20 12:36:26.415301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:01.060 [2024-11-20 12:36:26.415335] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:01.060 Passthru0 00:04:01.060 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.060 12:36:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:01.060 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.060 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.060 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.060 12:36:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:01.060 { 00:04:01.060 "name": "Malloc2", 00:04:01.060 "aliases": [ 00:04:01.060 "3c7f6097-bd9b-4c46-b2d6-ca4fb5ce3d24" 00:04:01.060 ], 00:04:01.060 "product_name": "Malloc disk", 00:04:01.060 "block_size": 512, 00:04:01.060 "num_blocks": 16384, 00:04:01.060 "uuid": "3c7f6097-bd9b-4c46-b2d6-ca4fb5ce3d24", 00:04:01.060 "assigned_rate_limits": { 00:04:01.060 "rw_ios_per_sec": 0, 00:04:01.060 "rw_mbytes_per_sec": 0, 00:04:01.060 "r_mbytes_per_sec": 0, 00:04:01.060 "w_mbytes_per_sec": 0 00:04:01.060 }, 00:04:01.060 "claimed": true, 00:04:01.061 "claim_type": "exclusive_write", 00:04:01.061 "zoned": false, 00:04:01.061 "supported_io_types": { 00:04:01.061 "read": true, 00:04:01.061 "write": true, 00:04:01.061 "unmap": true, 00:04:01.061 "flush": true, 00:04:01.061 "reset": true, 00:04:01.061 "nvme_admin": false, 00:04:01.061 "nvme_io": false, 00:04:01.061 "nvme_io_md": false, 00:04:01.061 "write_zeroes": true, 00:04:01.061 "zcopy": true, 00:04:01.061 "get_zone_info": false, 00:04:01.061 "zone_management": false, 00:04:01.061 "zone_append": false, 00:04:01.061 "compare": false, 
00:04:01.061 "compare_and_write": false, 00:04:01.061 "abort": true, 00:04:01.061 "seek_hole": false, 00:04:01.061 "seek_data": false, 00:04:01.061 "copy": true, 00:04:01.061 "nvme_iov_md": false 00:04:01.061 }, 00:04:01.061 "memory_domains": [ 00:04:01.061 { 00:04:01.061 "dma_device_id": "system", 00:04:01.061 "dma_device_type": 1 00:04:01.061 }, 00:04:01.061 { 00:04:01.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.061 "dma_device_type": 2 00:04:01.061 } 00:04:01.061 ], 00:04:01.061 "driver_specific": {} 00:04:01.061 }, 00:04:01.061 { 00:04:01.061 "name": "Passthru0", 00:04:01.061 "aliases": [ 00:04:01.061 "747081d4-0579-52ca-849b-46530a5f04f5" 00:04:01.061 ], 00:04:01.061 "product_name": "passthru", 00:04:01.061 "block_size": 512, 00:04:01.061 "num_blocks": 16384, 00:04:01.061 "uuid": "747081d4-0579-52ca-849b-46530a5f04f5", 00:04:01.061 "assigned_rate_limits": { 00:04:01.061 "rw_ios_per_sec": 0, 00:04:01.061 "rw_mbytes_per_sec": 0, 00:04:01.061 "r_mbytes_per_sec": 0, 00:04:01.061 "w_mbytes_per_sec": 0 00:04:01.061 }, 00:04:01.061 "claimed": false, 00:04:01.061 "zoned": false, 00:04:01.061 "supported_io_types": { 00:04:01.061 "read": true, 00:04:01.061 "write": true, 00:04:01.061 "unmap": true, 00:04:01.061 "flush": true, 00:04:01.061 "reset": true, 00:04:01.061 "nvme_admin": false, 00:04:01.061 "nvme_io": false, 00:04:01.061 "nvme_io_md": false, 00:04:01.061 "write_zeroes": true, 00:04:01.061 "zcopy": true, 00:04:01.061 "get_zone_info": false, 00:04:01.061 "zone_management": false, 00:04:01.061 "zone_append": false, 00:04:01.061 "compare": false, 00:04:01.061 "compare_and_write": false, 00:04:01.061 "abort": true, 00:04:01.061 "seek_hole": false, 00:04:01.061 "seek_data": false, 00:04:01.061 "copy": true, 00:04:01.061 "nvme_iov_md": false 00:04:01.061 }, 00:04:01.061 "memory_domains": [ 00:04:01.061 { 00:04:01.061 "dma_device_id": "system", 00:04:01.061 "dma_device_type": 1 00:04:01.061 }, 00:04:01.061 { 00:04:01.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.061 "dma_device_type": 2 00:04:01.061 } 00:04:01.061 ], 00:04:01.061 "driver_specific": { 00:04:01.061 "passthru": { 00:04:01.061 "name": "Passthru0", 00:04:01.061 "base_bdev_name": "Malloc2" 00:04:01.061 } 00:04:01.061 } 00:04:01.061 } 00:04:01.061 ]' 00:04:01.061 12:36:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:01.061 12:36:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:01.061 12:36:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:01.061 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.061 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.061 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.061 12:36:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:01.061 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.061 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.061 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.061 12:36:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:01.061 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.061 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.061 12:36:26 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.061 12:36:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:01.061 12:36:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:01.061 12:36:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:01.061 00:04:01.061 real 0m0.232s 00:04:01.061 user 0m0.118s 00:04:01.061 sys 0m0.037s 00:04:01.061 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.061 ************************************ 00:04:01.061 END TEST rpc_daemon_integrity 00:04:01.061 ************************************ 00:04:01.061 12:36:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.061 12:36:26 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:01.061 12:36:26 rpc -- rpc/rpc.sh@84 -- # killprocess 57137 00:04:01.061 12:36:26 rpc -- common/autotest_common.sh@954 -- # '[' -z 57137 ']' 00:04:01.061 12:36:26 rpc -- common/autotest_common.sh@958 -- # kill -0 57137 00:04:01.061 12:36:26 rpc -- common/autotest_common.sh@959 -- # uname 00:04:01.061 12:36:26 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:01.061 12:36:26 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57137 00:04:01.318 12:36:26 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:01.318 killing process with pid 57137 00:04:01.318 12:36:26 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:01.318 12:36:26 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57137' 00:04:01.318 12:36:26 rpc -- common/autotest_common.sh@973 -- # kill 57137 00:04:01.318 12:36:26 rpc -- common/autotest_common.sh@978 -- # wait 57137 00:04:02.691 00:04:02.691 real 0m3.242s 00:04:02.691 user 0m3.615s 00:04:02.691 sys 0m0.648s 00:04:02.691 12:36:27 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.691 12:36:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.691 ************************************ 00:04:02.691 END TEST rpc 00:04:02.691 ************************************ 00:04:02.691 12:36:27 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:02.691 12:36:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.691 12:36:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.691 12:36:27 -- common/autotest_common.sh@10 -- # set +x 00:04:02.691 ************************************ 00:04:02.691 START TEST skip_rpc 00:04:02.691 ************************************ 00:04:02.691 12:36:27 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:02.691 * Looking for test storage... 
00:04:02.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:02.691 12:36:27 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:02.691 12:36:27 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:02.691 12:36:27 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:02.691 12:36:27 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:02.691 12:36:27 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.691 12:36:27 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.691 12:36:27 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.691 12:36:27 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.691 12:36:27 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.691 12:36:27 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.691 12:36:27 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.691 12:36:27 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.691 12:36:27 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.691 12:36:27 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.691 12:36:27 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.691 12:36:27 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:02.691 12:36:27 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:02.691 12:36:27 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.691 12:36:27 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:02.691 12:36:27 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:02.691 12:36:28 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:02.691 12:36:28 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.691 12:36:28 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:02.691 12:36:28 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.691 12:36:28 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:02.691 12:36:28 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:02.691 12:36:28 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.691 12:36:28 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:02.691 12:36:28 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.691 12:36:28 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.691 12:36:28 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.692 12:36:28 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:02.692 12:36:28 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.692 12:36:28 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:02.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.692 --rc genhtml_branch_coverage=1 00:04:02.692 --rc genhtml_function_coverage=1 00:04:02.692 --rc genhtml_legend=1 00:04:02.692 --rc geninfo_all_blocks=1 00:04:02.692 --rc geninfo_unexecuted_blocks=1 00:04:02.692 00:04:02.692 ' 00:04:02.692 12:36:28 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:02.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.692 --rc genhtml_branch_coverage=1 00:04:02.692 --rc genhtml_function_coverage=1 00:04:02.692 --rc genhtml_legend=1 00:04:02.692 --rc geninfo_all_blocks=1 00:04:02.692 --rc geninfo_unexecuted_blocks=1 00:04:02.692 00:04:02.692 ' 00:04:02.692 12:36:28 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:02.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.692 --rc genhtml_branch_coverage=1 00:04:02.692 --rc genhtml_function_coverage=1 00:04:02.692 --rc genhtml_legend=1 00:04:02.692 --rc geninfo_all_blocks=1 00:04:02.692 --rc geninfo_unexecuted_blocks=1 00:04:02.692 00:04:02.692 ' 00:04:02.692 12:36:28 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:02.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.692 --rc genhtml_branch_coverage=1 00:04:02.692 --rc genhtml_function_coverage=1 00:04:02.692 --rc genhtml_legend=1 00:04:02.692 --rc geninfo_all_blocks=1 00:04:02.692 --rc geninfo_unexecuted_blocks=1 00:04:02.692 00:04:02.692 ' 00:04:02.692 12:36:28 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:02.692 12:36:28 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:02.692 12:36:28 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:02.692 12:36:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.692 12:36:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.692 12:36:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.692 ************************************ 00:04:02.692 START TEST skip_rpc 00:04:02.692 ************************************ 00:04:02.692 12:36:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:02.692 12:36:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57344 00:04:02.692 12:36:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:02.692 12:36:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:02.692 12:36:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:02.692 [2024-11-20 12:36:28.094150] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
00:04:02.692 [2024-11-20 12:36:28.094268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57344 ] 00:04:02.951 [2024-11-20 12:36:28.256716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.951 [2024-11-20 12:36:28.369909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57344 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57344 ']' 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57344 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57344 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57344' 00:04:08.256 killing process with pid 57344 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57344 00:04:08.256 12:36:33 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57344 00:04:08.823 00:04:08.823 real 0m6.283s 00:04:08.823 user 0m5.871s 00:04:08.823 sys 0m0.310s 00:04:08.823 12:36:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.823 ************************************ 00:04:08.823 END TEST skip_rpc 00:04:08.823 ************************************ 00:04:08.823 12:36:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:08.823 12:36:34 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:08.823 12:36:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.823 12:36:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.823 12:36:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.082 ************************************ 00:04:09.082 START TEST skip_rpc_with_json 00:04:09.082 ************************************ 00:04:09.082 12:36:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:09.082 12:36:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:09.082 12:36:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57437 00:04:09.082 12:36:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:09.082 12:36:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57437 00:04:09.082 12:36:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57437 ']' 00:04:09.082 12:36:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.082 12:36:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:09.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.082 12:36:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.082 12:36:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:09.082 12:36:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:09.082 12:36:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:09.082 [2024-11-20 12:36:34.406868] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
00:04:09.082 [2024-11-20 12:36:34.406959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57437 ] 00:04:09.082 [2024-11-20 12:36:34.556866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.340 [2024-11-20 12:36:34.650226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.907 12:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:09.907 12:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:09.907 12:36:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:09.907 12:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.907 12:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:09.907 [2024-11-20 12:36:35.214514] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:09.907 request: 00:04:09.907 { 00:04:09.907 "trtype": "tcp", 00:04:09.907 "method": "nvmf_get_transports", 00:04:09.907 "req_id": 1 00:04:09.907 } 00:04:09.907 Got JSON-RPC error response 00:04:09.907 response: 00:04:09.907 { 00:04:09.907 "code": -19, 00:04:09.907 "message": "No such device" 00:04:09.907 } 00:04:09.907 12:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:09.907 12:36:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:09.907 12:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.907 12:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:09.907 [2024-11-20 12:36:35.222605] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:09.907 12:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.907 12:36:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:09.907 12:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.907 12:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:09.907 12:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.907 12:36:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:09.907 { 00:04:09.907 "subsystems": [ 00:04:09.907 { 00:04:09.907 "subsystem": "fsdev", 00:04:09.907 "config": [ 00:04:09.907 { 00:04:09.907 "method": "fsdev_set_opts", 00:04:09.907 "params": { 00:04:09.907 "fsdev_io_pool_size": 65535, 00:04:09.907 "fsdev_io_cache_size": 256 00:04:09.907 } 00:04:09.907 } 00:04:09.907 ] 00:04:09.907 }, 00:04:09.907 { 00:04:09.907 "subsystem": "keyring", 00:04:09.907 "config": [] 00:04:09.907 }, 00:04:09.907 { 00:04:09.907 "subsystem": "iobuf", 00:04:09.907 "config": [ 00:04:09.907 { 00:04:09.907 "method": "iobuf_set_options", 00:04:09.907 "params": { 00:04:09.907 "small_pool_count": 8192, 00:04:09.907 "large_pool_count": 1024, 00:04:09.907 "small_bufsize": 8192, 00:04:09.907 "large_bufsize": 135168, 00:04:09.907 "enable_numa": false 00:04:09.907 } 00:04:09.907 } 00:04:09.907 ] 00:04:09.907 }, 00:04:09.907 { 00:04:09.907 "subsystem": "sock", 00:04:09.907 "config": [ 00:04:09.907 { 
00:04:09.907 "method": "sock_set_default_impl", 00:04:09.907 "params": { 00:04:09.907 "impl_name": "posix" 00:04:09.907 } 00:04:09.907 }, 00:04:09.907 { 00:04:09.907 "method": "sock_impl_set_options", 00:04:09.907 "params": { 00:04:09.907 "impl_name": "ssl", 00:04:09.907 "recv_buf_size": 4096, 00:04:09.907 "send_buf_size": 4096, 00:04:09.907 "enable_recv_pipe": true, 00:04:09.907 "enable_quickack": false, 00:04:09.907 "enable_placement_id": 0, 00:04:09.907 "enable_zerocopy_send_server": true, 00:04:09.907 "enable_zerocopy_send_client": false, 00:04:09.907 "zerocopy_threshold": 0, 00:04:09.907 "tls_version": 0, 00:04:09.907 "enable_ktls": false 00:04:09.907 } 00:04:09.907 }, 00:04:09.907 { 00:04:09.907 "method": "sock_impl_set_options", 00:04:09.907 "params": { 00:04:09.907 "impl_name": "posix", 00:04:09.907 "recv_buf_size": 2097152, 00:04:09.907 "send_buf_size": 2097152, 00:04:09.907 "enable_recv_pipe": true, 00:04:09.907 "enable_quickack": false, 00:04:09.907 "enable_placement_id": 0, 00:04:09.907 "enable_zerocopy_send_server": true, 00:04:09.907 "enable_zerocopy_send_client": false, 00:04:09.907 "zerocopy_threshold": 0, 00:04:09.907 "tls_version": 0, 00:04:09.907 "enable_ktls": false 00:04:09.907 } 00:04:09.907 } 00:04:09.907 ] 00:04:09.908 }, 00:04:09.908 { 00:04:09.908 "subsystem": "vmd", 00:04:09.908 "config": [] 00:04:09.908 }, 00:04:09.908 { 00:04:09.908 "subsystem": "accel", 00:04:09.908 "config": [ 00:04:09.908 { 00:04:09.908 "method": "accel_set_options", 00:04:09.908 "params": { 00:04:09.908 "small_cache_size": 128, 00:04:09.908 "large_cache_size": 16, 00:04:09.908 "task_count": 2048, 00:04:09.908 "sequence_count": 2048, 00:04:09.908 "buf_count": 2048 00:04:09.908 } 00:04:09.908 } 00:04:09.908 ] 00:04:09.908 }, 00:04:09.908 { 00:04:09.908 "subsystem": "bdev", 00:04:09.908 "config": [ 00:04:09.908 { 00:04:09.908 "method": "bdev_set_options", 00:04:09.908 "params": { 00:04:09.908 "bdev_io_pool_size": 65535, 00:04:09.908 "bdev_io_cache_size": 256, 00:04:09.908 "bdev_auto_examine": true, 00:04:09.908 "iobuf_small_cache_size": 128, 00:04:09.908 "iobuf_large_cache_size": 16 00:04:09.908 } 00:04:09.908 }, 00:04:09.908 { 00:04:09.908 "method": "bdev_raid_set_options", 00:04:09.908 "params": { 00:04:09.908 "process_window_size_kb": 1024, 00:04:09.908 "process_max_bandwidth_mb_sec": 0 00:04:09.908 } 00:04:09.908 }, 00:04:09.908 { 00:04:09.908 "method": "bdev_iscsi_set_options", 00:04:09.908 "params": { 00:04:09.908 "timeout_sec": 30 00:04:09.908 } 00:04:09.908 }, 00:04:09.908 { 00:04:09.908 "method": "bdev_nvme_set_options", 00:04:09.908 "params": { 00:04:09.908 "action_on_timeout": "none", 00:04:09.908 "timeout_us": 0, 00:04:09.908 "timeout_admin_us": 0, 00:04:09.908 "keep_alive_timeout_ms": 10000, 00:04:09.908 "arbitration_burst": 0, 00:04:09.908 "low_priority_weight": 0, 00:04:09.908 "medium_priority_weight": 0, 00:04:09.908 "high_priority_weight": 0, 00:04:09.908 "nvme_adminq_poll_period_us": 10000, 00:04:09.908 "nvme_ioq_poll_period_us": 0, 00:04:09.908 "io_queue_requests": 0, 00:04:09.908 "delay_cmd_submit": true, 00:04:09.908 "transport_retry_count": 4, 00:04:09.908 "bdev_retry_count": 3, 00:04:09.908 "transport_ack_timeout": 0, 00:04:09.908 "ctrlr_loss_timeout_sec": 0, 00:04:09.908 "reconnect_delay_sec": 0, 00:04:09.908 "fast_io_fail_timeout_sec": 0, 00:04:09.908 "disable_auto_failback": false, 00:04:09.908 "generate_uuids": false, 00:04:09.908 "transport_tos": 0, 00:04:09.908 "nvme_error_stat": false, 00:04:09.908 "rdma_srq_size": 0, 00:04:09.908 "io_path_stat": false, 
00:04:09.908 "allow_accel_sequence": false, 00:04:09.908 "rdma_max_cq_size": 0, 00:04:09.908 "rdma_cm_event_timeout_ms": 0, 00:04:09.908 "dhchap_digests": [ 00:04:09.908 "sha256", 00:04:09.908 "sha384", 00:04:09.908 "sha512" 00:04:09.908 ], 00:04:09.908 "dhchap_dhgroups": [ 00:04:09.908 "null", 00:04:09.908 "ffdhe2048", 00:04:09.908 "ffdhe3072", 00:04:09.908 "ffdhe4096", 00:04:09.908 "ffdhe6144", 00:04:09.908 "ffdhe8192" 00:04:09.908 ] 00:04:09.908 } 00:04:09.908 }, 00:04:09.908 { 00:04:09.908 "method": "bdev_nvme_set_hotplug", 00:04:09.908 "params": { 00:04:09.908 "period_us": 100000, 00:04:09.908 "enable": false 00:04:09.908 } 00:04:09.908 }, 00:04:09.908 { 00:04:09.908 "method": "bdev_wait_for_examine" 00:04:09.908 } 00:04:09.908 ] 00:04:09.908 }, 00:04:09.908 { 00:04:09.908 "subsystem": "scsi", 00:04:09.908 "config": null 00:04:09.908 }, 00:04:09.908 { 00:04:09.908 "subsystem": "scheduler", 00:04:09.908 "config": [ 00:04:09.908 { 00:04:09.908 "method": "framework_set_scheduler", 00:04:09.908 "params": { 00:04:09.908 "name": "static" 00:04:09.908 } 00:04:09.908 } 00:04:09.908 ] 00:04:09.908 }, 00:04:09.908 { 00:04:09.908 "subsystem": "vhost_scsi", 00:04:09.908 "config": [] 00:04:09.908 }, 00:04:09.908 { 00:04:09.908 "subsystem": "vhost_blk", 00:04:09.908 "config": [] 00:04:09.908 }, 00:04:09.908 { 00:04:09.908 "subsystem": "ublk", 00:04:09.908 "config": [] 00:04:09.908 }, 00:04:09.908 { 00:04:09.908 "subsystem": "nbd", 00:04:09.908 "config": [] 00:04:09.908 }, 00:04:09.908 { 00:04:09.908 "subsystem": "nvmf", 00:04:09.908 "config": [ 00:04:09.908 { 00:04:09.908 "method": "nvmf_set_config", 00:04:09.908 "params": { 00:04:09.908 "discovery_filter": "match_any", 00:04:09.908 "admin_cmd_passthru": { 00:04:09.908 "identify_ctrlr": false 00:04:09.908 }, 00:04:09.908 "dhchap_digests": [ 00:04:09.908 "sha256", 00:04:09.908 "sha384", 00:04:09.908 "sha512" 00:04:09.908 ], 00:04:09.908 "dhchap_dhgroups": [ 00:04:09.908 "null", 00:04:09.908 "ffdhe2048", 00:04:09.908 "ffdhe3072", 00:04:09.908 "ffdhe4096", 00:04:09.908 "ffdhe6144", 00:04:09.908 "ffdhe8192" 00:04:09.908 ] 00:04:09.908 } 00:04:09.908 }, 00:04:09.908 { 00:04:09.908 "method": "nvmf_set_max_subsystems", 00:04:09.908 "params": { 00:04:09.908 "max_subsystems": 1024 00:04:09.908 } 00:04:09.908 }, 00:04:09.908 { 00:04:09.908 "method": "nvmf_set_crdt", 00:04:09.908 "params": { 00:04:09.908 "crdt1": 0, 00:04:09.908 "crdt2": 0, 00:04:09.908 "crdt3": 0 00:04:09.908 } 00:04:09.908 }, 00:04:09.908 { 00:04:09.908 "method": "nvmf_create_transport", 00:04:09.908 "params": { 00:04:09.908 "trtype": "TCP", 00:04:09.908 "max_queue_depth": 128, 00:04:09.908 "max_io_qpairs_per_ctrlr": 127, 00:04:09.908 "in_capsule_data_size": 4096, 00:04:09.908 "max_io_size": 131072, 00:04:09.908 "io_unit_size": 131072, 00:04:09.908 "max_aq_depth": 128, 00:04:09.908 "num_shared_buffers": 511, 00:04:09.908 "buf_cache_size": 4294967295, 00:04:09.908 "dif_insert_or_strip": false, 00:04:09.908 "zcopy": false, 00:04:09.908 "c2h_success": true, 00:04:09.908 "sock_priority": 0, 00:04:09.908 "abort_timeout_sec": 1, 00:04:09.908 "ack_timeout": 0, 00:04:09.908 "data_wr_pool_size": 0 00:04:09.908 } 00:04:09.908 } 00:04:09.908 ] 00:04:09.908 }, 00:04:09.908 { 00:04:09.908 "subsystem": "iscsi", 00:04:09.908 "config": [ 00:04:09.908 { 00:04:09.908 "method": "iscsi_set_options", 00:04:09.908 "params": { 00:04:09.908 "node_base": "iqn.2016-06.io.spdk", 00:04:09.908 "max_sessions": 128, 00:04:09.908 "max_connections_per_session": 2, 00:04:09.908 "max_queue_depth": 64, 00:04:09.909 
"default_time2wait": 2, 00:04:09.909 "default_time2retain": 20, 00:04:09.909 "first_burst_length": 8192, 00:04:09.909 "immediate_data": true, 00:04:09.909 "allow_duplicated_isid": false, 00:04:09.909 "error_recovery_level": 0, 00:04:09.909 "nop_timeout": 60, 00:04:09.909 "nop_in_interval": 30, 00:04:09.909 "disable_chap": false, 00:04:09.909 "require_chap": false, 00:04:09.909 "mutual_chap": false, 00:04:09.909 "chap_group": 0, 00:04:09.909 "max_large_datain_per_connection": 64, 00:04:09.909 "max_r2t_per_connection": 4, 00:04:09.909 "pdu_pool_size": 36864, 00:04:09.909 "immediate_data_pool_size": 16384, 00:04:09.909 "data_out_pool_size": 2048 00:04:09.909 } 00:04:09.909 } 00:04:09.909 ] 00:04:09.909 } 00:04:09.909 ] 00:04:09.909 } 00:04:09.909 12:36:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:09.909 12:36:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57437 00:04:09.909 12:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57437 ']' 00:04:09.909 12:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57437 00:04:09.909 12:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:09.909 12:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:09.909 12:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57437 00:04:09.909 12:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:09.909 killing process with pid 57437 00:04:09.909 12:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:09.909 12:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57437' 00:04:09.909 12:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57437 00:04:09.909 12:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57437 00:04:11.283 12:36:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57477 00:04:11.283 12:36:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:11.283 12:36:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:16.547 12:36:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57477 00:04:16.547 12:36:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57477 ']' 00:04:16.547 12:36:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57477 00:04:16.547 12:36:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:16.547 12:36:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:16.547 12:36:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57477 00:04:16.547 12:36:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:16.547 killing process with pid 57477 00:04:16.547 12:36:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:16.547 12:36:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57477' 00:04:16.547 12:36:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57477 00:04:16.547 12:36:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57477 00:04:17.481 12:36:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:17.481 12:36:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:17.481 00:04:17.481 real 0m8.610s 00:04:17.481 user 0m8.152s 00:04:17.481 sys 0m0.652s 00:04:17.481 12:36:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.481 ************************************ 00:04:17.481 END TEST skip_rpc_with_json 00:04:17.481 12:36:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:17.481 ************************************ 00:04:17.740 12:36:42 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:17.740 12:36:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.740 12:36:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.740 12:36:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.740 ************************************ 00:04:17.740 START TEST skip_rpc_with_delay 00:04:17.740 ************************************ 00:04:17.740 12:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:17.740 12:36:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:17.740 12:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:17.740 12:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:17.740 12:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:17.740 12:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.740 12:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:17.740 12:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.740 12:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:17.740 12:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.740 12:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:17.740 12:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:17.740 12:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:17.740 [2024-11-20 12:36:43.092763] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:17.740 12:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:17.740 12:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:17.740 12:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:17.740 12:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:17.740 ************************************ 00:04:17.740 END TEST skip_rpc_with_delay 00:04:17.740 ************************************ 00:04:17.740 00:04:17.740 real 0m0.130s 00:04:17.740 user 0m0.067s 00:04:17.740 sys 0m0.061s 00:04:17.740 12:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.740 12:36:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:17.740 12:36:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:17.740 12:36:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:17.740 12:36:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:17.741 12:36:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.741 12:36:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.741 12:36:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.741 ************************************ 00:04:17.741 START TEST exit_on_failed_rpc_init 00:04:17.741 ************************************ 00:04:17.741 12:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:17.741 12:36:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57599 00:04:17.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.741 12:36:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57599 00:04:17.741 12:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57599 ']' 00:04:17.741 12:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.741 12:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:17.741 12:36:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:17.741 12:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.741 12:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:17.741 12:36:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:17.999 [2024-11-20 12:36:43.285886] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
00:04:17.999 [2024-11-20 12:36:43.286013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57599 ] 00:04:17.999 [2024-11-20 12:36:43.442114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.257 [2024-11-20 12:36:43.539518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.823 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:18.823 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:18.823 12:36:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.823 12:36:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:18.824 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:18.824 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:18.824 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:18.824 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.824 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:18.824 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.824 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:18.824 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.824 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:18.824 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:18.824 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:18.824 [2024-11-20 12:36:44.209649] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:04:18.824 [2024-11-20 12:36:44.209787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57617 ] 00:04:19.082 [2024-11-20 12:36:44.368332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.082 [2024-11-20 12:36:44.472227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:19.082 [2024-11-20 12:36:44.472322] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:19.082 [2024-11-20 12:36:44.472336] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:19.082 [2024-11-20 12:36:44.472349] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:19.340 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:19.340 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:19.340 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:19.340 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:19.340 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:19.340 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:19.340 12:36:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:19.340 12:36:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57599 00:04:19.340 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57599 ']' 00:04:19.340 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57599 00:04:19.340 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:19.340 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:19.340 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57599 00:04:19.340 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:19.340 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:19.340 killing process with pid 57599 00:04:19.340 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57599' 00:04:19.340 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57599 00:04:19.340 12:36:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57599 00:04:20.713 00:04:20.713 real 0m2.727s 00:04:20.713 user 0m3.030s 00:04:20.713 sys 0m0.434s 00:04:20.713 12:36:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.713 ************************************ 00:04:20.713 END TEST exit_on_failed_rpc_init 00:04:20.713 ************************************ 00:04:20.713 12:36:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:20.713 12:36:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:20.713 00:04:20.713 real 0m18.098s 00:04:20.713 user 0m17.250s 00:04:20.713 sys 0m1.649s 00:04:20.713 12:36:45 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.713 ************************************ 00:04:20.713 END TEST skip_rpc 00:04:20.713 ************************************ 00:04:20.713 12:36:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.713 12:36:46 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:20.713 12:36:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.713 12:36:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.713 12:36:46 -- common/autotest_common.sh@10 -- # set +x 00:04:20.713 
************************************ 00:04:20.713 START TEST rpc_client 00:04:20.713 ************************************ 00:04:20.713 12:36:46 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:20.713 * Looking for test storage... 00:04:20.713 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:20.713 12:36:46 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:20.713 12:36:46 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:20.713 12:36:46 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:20.713 12:36:46 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:20.713 12:36:46 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.713 12:36:46 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.713 12:36:46 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.713 12:36:46 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.713 12:36:46 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.713 12:36:46 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.713 12:36:46 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.713 12:36:46 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.713 12:36:46 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.713 12:36:46 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.713 12:36:46 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.713 12:36:46 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:20.713 12:36:46 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:20.713 12:36:46 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.714 12:36:46 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.714 12:36:46 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:20.714 12:36:46 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:20.714 12:36:46 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.714 12:36:46 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:20.714 12:36:46 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.714 12:36:46 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:20.714 12:36:46 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:20.714 12:36:46 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.714 12:36:46 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:20.714 12:36:46 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.714 12:36:46 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.714 12:36:46 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.714 12:36:46 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:20.714 12:36:46 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.714 12:36:46 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:20.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.714 --rc genhtml_branch_coverage=1 00:04:20.714 --rc genhtml_function_coverage=1 00:04:20.714 --rc genhtml_legend=1 00:04:20.714 --rc geninfo_all_blocks=1 00:04:20.714 --rc geninfo_unexecuted_blocks=1 00:04:20.714 00:04:20.714 ' 00:04:20.714 12:36:46 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:20.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.714 --rc genhtml_branch_coverage=1 00:04:20.714 --rc genhtml_function_coverage=1 00:04:20.714 --rc genhtml_legend=1 00:04:20.714 --rc geninfo_all_blocks=1 00:04:20.714 --rc geninfo_unexecuted_blocks=1 00:04:20.714 00:04:20.714 ' 00:04:20.714 12:36:46 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:20.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.714 --rc genhtml_branch_coverage=1 00:04:20.714 --rc genhtml_function_coverage=1 00:04:20.714 --rc genhtml_legend=1 00:04:20.714 --rc geninfo_all_blocks=1 00:04:20.714 --rc geninfo_unexecuted_blocks=1 00:04:20.714 00:04:20.714 ' 00:04:20.714 12:36:46 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:20.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.714 --rc genhtml_branch_coverage=1 00:04:20.714 --rc genhtml_function_coverage=1 00:04:20.714 --rc genhtml_legend=1 00:04:20.714 --rc geninfo_all_blocks=1 00:04:20.714 --rc geninfo_unexecuted_blocks=1 00:04:20.714 00:04:20.714 ' 00:04:20.714 12:36:46 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:20.714 OK 00:04:20.714 12:36:46 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:20.714 00:04:20.714 real 0m0.194s 00:04:20.714 user 0m0.108s 00:04:20.714 sys 0m0.093s 00:04:20.714 12:36:46 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.714 12:36:46 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:20.714 ************************************ 00:04:20.714 END TEST rpc_client 00:04:20.714 ************************************ 00:04:20.973 12:36:46 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:20.973 12:36:46 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.973 12:36:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.973 12:36:46 -- common/autotest_common.sh@10 -- # set +x 00:04:20.973 ************************************ 00:04:20.973 START TEST json_config 00:04:20.973 ************************************ 00:04:20.973 12:36:46 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:20.973 12:36:46 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:20.973 12:36:46 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:20.973 12:36:46 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:20.973 12:36:46 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:20.973 12:36:46 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.973 12:36:46 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.973 12:36:46 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.973 12:36:46 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.973 12:36:46 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.973 12:36:46 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.973 12:36:46 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.973 12:36:46 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.973 12:36:46 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.973 12:36:46 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.973 12:36:46 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.973 12:36:46 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:20.973 12:36:46 json_config -- scripts/common.sh@345 -- # : 1 00:04:20.973 12:36:46 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.973 12:36:46 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.973 12:36:46 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:20.973 12:36:46 json_config -- scripts/common.sh@353 -- # local d=1 00:04:20.973 12:36:46 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.973 12:36:46 json_config -- scripts/common.sh@355 -- # echo 1 00:04:20.973 12:36:46 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.973 12:36:46 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:20.973 12:36:46 json_config -- scripts/common.sh@353 -- # local d=2 00:04:20.973 12:36:46 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.973 12:36:46 json_config -- scripts/common.sh@355 -- # echo 2 00:04:20.973 12:36:46 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.973 12:36:46 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.973 12:36:46 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.973 12:36:46 json_config -- scripts/common.sh@368 -- # return 0 00:04:20.973 12:36:46 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.973 12:36:46 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:20.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.973 --rc genhtml_branch_coverage=1 00:04:20.973 --rc genhtml_function_coverage=1 00:04:20.973 --rc genhtml_legend=1 00:04:20.973 --rc geninfo_all_blocks=1 00:04:20.973 --rc geninfo_unexecuted_blocks=1 00:04:20.973 00:04:20.973 ' 00:04:20.973 12:36:46 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:20.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.973 --rc genhtml_branch_coverage=1 00:04:20.974 --rc genhtml_function_coverage=1 00:04:20.974 --rc genhtml_legend=1 00:04:20.974 --rc geninfo_all_blocks=1 00:04:20.974 --rc geninfo_unexecuted_blocks=1 00:04:20.974 00:04:20.974 ' 00:04:20.974 12:36:46 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:20.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.974 --rc genhtml_branch_coverage=1 00:04:20.974 --rc genhtml_function_coverage=1 00:04:20.974 --rc genhtml_legend=1 00:04:20.974 --rc geninfo_all_blocks=1 00:04:20.974 --rc geninfo_unexecuted_blocks=1 00:04:20.974 00:04:20.974 ' 00:04:20.974 12:36:46 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:20.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.974 --rc genhtml_branch_coverage=1 00:04:20.974 --rc genhtml_function_coverage=1 00:04:20.974 --rc genhtml_legend=1 00:04:20.974 --rc geninfo_all_blocks=1 00:04:20.974 --rc geninfo_unexecuted_blocks=1 00:04:20.974 00:04:20.974 ' 00:04:20.974 12:36:46 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:20.974 12:36:46 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fe012ac3-ab77-41b9-bd8c-e89873fa6c26 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=fe012ac3-ab77-41b9-bd8c-e89873fa6c26 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:20.974 12:36:46 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:20.974 12:36:46 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:20.974 12:36:46 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:20.974 12:36:46 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:20.974 12:36:46 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.974 12:36:46 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.974 12:36:46 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.974 12:36:46 json_config -- paths/export.sh@5 -- # export PATH 00:04:20.974 12:36:46 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@51 -- # : 0 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:20.974 12:36:46 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:20.974 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:20.974 12:36:46 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:20.974 12:36:46 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:20.974 12:36:46 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:20.974 12:36:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:20.974 12:36:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:20.974 12:36:46 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:20.974 WARNING: No tests are enabled so not running JSON configuration tests 00:04:20.974 12:36:46 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:20.974 12:36:46 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:20.974 ************************************ 00:04:20.974 END TEST json_config 00:04:20.974 ************************************ 00:04:20.974 00:04:20.974 real 0m0.140s 00:04:20.974 user 0m0.087s 00:04:20.974 sys 0m0.056s 00:04:20.974 12:36:46 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.974 12:36:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.974 12:36:46 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:20.974 12:36:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.974 12:36:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.974 12:36:46 -- common/autotest_common.sh@10 -- # set +x 00:04:20.974 ************************************ 00:04:20.974 START TEST json_config_extra_key 00:04:20.974 ************************************ 00:04:20.974 12:36:46 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:21.232 12:36:46 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:21.232 12:36:46 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:21.232 12:36:46 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:21.232 12:36:46 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.232 12:36:46 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.232 12:36:46 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:21.232 12:36:46 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.232 12:36:46 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:21.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.232 --rc genhtml_branch_coverage=1 00:04:21.232 --rc genhtml_function_coverage=1 00:04:21.232 --rc genhtml_legend=1 00:04:21.232 --rc geninfo_all_blocks=1 00:04:21.232 --rc geninfo_unexecuted_blocks=1 00:04:21.232 00:04:21.232 ' 00:04:21.232 12:36:46 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:21.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.232 --rc genhtml_branch_coverage=1 00:04:21.232 --rc genhtml_function_coverage=1 00:04:21.232 --rc genhtml_legend=1 00:04:21.232 --rc geninfo_all_blocks=1 00:04:21.232 --rc geninfo_unexecuted_blocks=1 00:04:21.232 00:04:21.232 ' 00:04:21.232 12:36:46 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:21.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.232 --rc genhtml_branch_coverage=1 00:04:21.232 --rc genhtml_function_coverage=1 00:04:21.232 --rc genhtml_legend=1 00:04:21.232 --rc geninfo_all_blocks=1 00:04:21.232 --rc geninfo_unexecuted_blocks=1 00:04:21.232 00:04:21.232 ' 00:04:21.233 12:36:46 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:21.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.233 --rc genhtml_branch_coverage=1 00:04:21.233 --rc 
genhtml_function_coverage=1 00:04:21.233 --rc genhtml_legend=1 00:04:21.233 --rc geninfo_all_blocks=1 00:04:21.233 --rc geninfo_unexecuted_blocks=1 00:04:21.233 00:04:21.233 ' 00:04:21.233 12:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fe012ac3-ab77-41b9-bd8c-e89873fa6c26 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=fe012ac3-ab77-41b9-bd8c-e89873fa6c26 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:21.233 12:36:46 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:21.233 12:36:46 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:21.233 12:36:46 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:21.233 12:36:46 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:21.233 12:36:46 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.233 12:36:46 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.233 12:36:46 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.233 12:36:46 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:21.233 12:36:46 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:21.233 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:21.233 12:36:46 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:21.233 12:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:21.233 12:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:21.233 12:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:21.233 12:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:21.233 12:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:21.233 12:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:21.233 12:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:21.233 12:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:21.233 12:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:21.233 INFO: launching applications... 00:04:21.233 12:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:21.233 12:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
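The common.sh sourced above tracks every app it launches through parallel associative arrays, all keyed by a logical app name, with an ERR trap so the test fails fast on the first broken command. A minimal sketch of that bookkeeping pattern, assuming it runs from an SPDK checkout (the socket path and parameters here are illustrative, taken from this trace rather than from common.sh itself):

#!/usr/bin/env bash
# Parallel associative arrays, all keyed by the logical app name.
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')

# Fail the whole test on the first broken command, reporting where.
on_error_exit() { echo "error in ${1}() at line ${2}" >&2; exit 1; }
trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR

start_app() {
    local app=$1
    # Params stay unquoted on purpose: they must word-split into flags.
    ./build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" &
    app_pid[$app]=$!
}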
00:04:21.233 12:36:46 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:21.233 12:36:46 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:21.233 12:36:46 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:21.233 12:36:46 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:21.233 12:36:46 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:21.233 12:36:46 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:21.233 12:36:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:21.233 12:36:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:21.233 12:36:46 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57805 00:04:21.233 Waiting for target to run... 00:04:21.233 12:36:46 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:21.233 12:36:46 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57805 /var/tmp/spdk_tgt.sock 00:04:21.233 12:36:46 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57805 ']' 00:04:21.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:21.233 12:36:46 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:21.233 12:36:46 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.233 12:36:46 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:21.233 12:36:46 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.233 12:36:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:21.233 12:36:46 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:21.233 [2024-11-20 12:36:46.658306] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:04:21.233 [2024-11-20 12:36:46.658800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57805 ] 00:04:21.799 [2024-11-20 12:36:47.037549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.799 [2024-11-20 12:36:47.134640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.374 12:36:47 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.374 00:04:22.374 INFO: shutting down applications... 00:04:22.374 12:36:47 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:22.374 12:36:47 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:22.374 12:36:47 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:22.374 12:36:47 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:22.374 12:36:47 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:22.374 12:36:47 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:22.374 12:36:47 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57805 ]] 00:04:22.374 12:36:47 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57805 00:04:22.374 12:36:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:22.374 12:36:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:22.374 12:36:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57805 00:04:22.374 12:36:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:22.940 12:36:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:22.940 12:36:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:22.940 12:36:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57805 00:04:22.940 12:36:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:23.197 12:36:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:23.197 12:36:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:23.197 12:36:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57805 00:04:23.197 12:36:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:23.762 12:36:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:23.762 12:36:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:23.762 12:36:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57805 00:04:23.762 12:36:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:24.328 12:36:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:24.328 12:36:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.328 12:36:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57805 00:04:24.328 SPDK target shutdown done 00:04:24.328 Success 00:04:24.328 12:36:49 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:24.328 12:36:49 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:24.328 12:36:49 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:24.328 12:36:49 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:24.328 12:36:49 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:24.328 ************************************ 00:04:24.328 END TEST json_config_extra_key 00:04:24.328 ************************************ 00:04:24.328 00:04:24.328 real 0m3.223s 00:04:24.328 user 0m2.760s 00:04:24.328 sys 0m0.452s 00:04:24.328 12:36:49 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.328 12:36:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:24.328 12:36:49 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:24.328 12:36:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.328 12:36:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.328 12:36:49 -- common/autotest_common.sh@10 -- # set +x 00:04:24.328 
************************************ 00:04:24.328 START TEST alias_rpc 00:04:24.328 ************************************ 00:04:24.328 12:36:49 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:24.328 * Looking for test storage... 00:04:24.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:24.328 12:36:49 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:24.328 12:36:49 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:24.328 12:36:49 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:24.587 12:36:49 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.587 12:36:49 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:24.587 12:36:49 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.587 12:36:49 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:24.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.587 --rc genhtml_branch_coverage=1 00:04:24.587 --rc genhtml_function_coverage=1 00:04:24.587 --rc genhtml_legend=1 00:04:24.587 --rc geninfo_all_blocks=1 00:04:24.587 --rc geninfo_unexecuted_blocks=1 00:04:24.587 00:04:24.587 ' 00:04:24.587 12:36:49 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:24.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.587 --rc genhtml_branch_coverage=1 00:04:24.587 --rc genhtml_function_coverage=1 00:04:24.587 --rc genhtml_legend=1 00:04:24.587 --rc geninfo_all_blocks=1 00:04:24.587 --rc geninfo_unexecuted_blocks=1 00:04:24.587 00:04:24.587 ' 00:04:24.587 12:36:49 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:24.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.587 --rc genhtml_branch_coverage=1 00:04:24.587 --rc genhtml_function_coverage=1 00:04:24.587 --rc genhtml_legend=1 00:04:24.587 --rc geninfo_all_blocks=1 00:04:24.587 --rc geninfo_unexecuted_blocks=1 00:04:24.587 00:04:24.587 ' 00:04:24.587 12:36:49 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:24.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.587 --rc genhtml_branch_coverage=1 00:04:24.587 --rc genhtml_function_coverage=1 00:04:24.587 --rc genhtml_legend=1 00:04:24.587 --rc geninfo_all_blocks=1 00:04:24.587 --rc geninfo_unexecuted_blocks=1 00:04:24.587 00:04:24.587 ' 00:04:24.587 12:36:49 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:24.587 12:36:49 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57904 00:04:24.587 12:36:49 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57904 00:04:24.587 12:36:49 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:24.587 12:36:49 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57904 ']' 00:04:24.587 12:36:49 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.587 12:36:49 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
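The lcov probe replayed above bottoms out in scripts/common.sh's cmp_versions: both version strings are split on '.', '-' and ':' and compared field by field as integers, with missing fields treated as zero. A condensed, numeric-only sketch of the same comparison (the real helper also handles the other operators, not just '<'):

# version_lt A B -> succeeds when A < B (numeric fields only)
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"

    local v a b
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # absent fields count as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not "less than"
}

# e.g. the check above: version_lt "$(lcov --version | awk '{print $NF}')" 2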
00:04:24.587 12:36:49 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.587 12:36:49 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.587 12:36:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.587 [2024-11-20 12:36:49.957620] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:04:24.587 [2024-11-20 12:36:49.957751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57904 ] 00:04:24.845 [2024-11-20 12:36:50.117100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.845 [2024-11-20 12:36:50.218187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.412 12:36:50 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.412 12:36:50 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:25.412 12:36:50 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:25.670 12:36:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57904 00:04:25.670 12:36:51 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57904 ']' 00:04:25.670 12:36:51 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57904 00:04:25.670 12:36:51 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:25.670 12:36:51 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.670 12:36:51 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57904 00:04:25.670 12:36:51 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:25.670 killing process with pid 57904 00:04:25.670 12:36:51 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:25.670 12:36:51 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57904' 00:04:25.670 12:36:51 alias_rpc -- common/autotest_common.sh@973 -- # kill 57904 00:04:25.670 12:36:51 alias_rpc -- common/autotest_common.sh@978 -- # wait 57904 00:04:27.086 00:04:27.086 real 0m2.876s 00:04:27.086 user 0m2.957s 00:04:27.086 sys 0m0.420s 00:04:27.086 12:36:52 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.086 12:36:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.086 ************************************ 00:04:27.086 END TEST alias_rpc 00:04:27.086 ************************************ 00:04:27.344 12:36:52 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:27.344 12:36:52 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:27.344 12:36:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.344 12:36:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.344 12:36:52 -- common/autotest_common.sh@10 -- # set +x 00:04:27.344 ************************************ 00:04:27.344 START TEST spdkcli_tcp 00:04:27.344 ************************************ 00:04:27.344 12:36:52 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:27.344 * Looking for test storage... 
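killprocess, traced just above, guards before it shoots: confirm the pid is set and still alive, and on Linux read the process's comm field so a stray sudo wrapper is never the thing being signalled. A condensed sketch of that guard (the real helper has more escalation paths than shown here):

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0   # nothing left to kill

    if [[ $(uname) == Linux ]]; then
        # Never signal a privileged wrapper by mistake.
        [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
    fi

    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true   # reaps only if $pid is our child
}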
00:04:27.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:27.344 12:36:52 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:27.344 12:36:52 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:27.344 12:36:52 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:27.344 12:36:52 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:27.344 12:36:52 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.344 12:36:52 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.344 12:36:52 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.344 12:36:52 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.344 12:36:52 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.344 12:36:52 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.344 12:36:52 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.344 12:36:52 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.344 12:36:52 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.344 12:36:52 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.344 12:36:52 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.344 12:36:52 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:27.345 12:36:52 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:27.345 12:36:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.345 12:36:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:27.345 12:36:52 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:27.345 12:36:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:27.345 12:36:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.345 12:36:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:27.345 12:36:52 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.345 12:36:52 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:27.345 12:36:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:27.345 12:36:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.345 12:36:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:27.345 12:36:52 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.345 12:36:52 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.345 12:36:52 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.345 12:36:52 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:27.345 12:36:52 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.345 12:36:52 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:27.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.345 --rc genhtml_branch_coverage=1 00:04:27.345 --rc genhtml_function_coverage=1 00:04:27.345 --rc genhtml_legend=1 00:04:27.345 --rc geninfo_all_blocks=1 00:04:27.345 --rc geninfo_unexecuted_blocks=1 00:04:27.345 00:04:27.345 ' 00:04:27.345 12:36:52 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:27.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.345 --rc genhtml_branch_coverage=1 00:04:27.345 --rc genhtml_function_coverage=1 00:04:27.345 --rc genhtml_legend=1 00:04:27.345 --rc geninfo_all_blocks=1 00:04:27.345 --rc geninfo_unexecuted_blocks=1 00:04:27.345 
00:04:27.345 ' 00:04:27.345 12:36:52 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:27.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.345 --rc genhtml_branch_coverage=1 00:04:27.345 --rc genhtml_function_coverage=1 00:04:27.345 --rc genhtml_legend=1 00:04:27.345 --rc geninfo_all_blocks=1 00:04:27.345 --rc geninfo_unexecuted_blocks=1 00:04:27.345 00:04:27.345 ' 00:04:27.345 12:36:52 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:27.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.345 --rc genhtml_branch_coverage=1 00:04:27.345 --rc genhtml_function_coverage=1 00:04:27.345 --rc genhtml_legend=1 00:04:27.345 --rc geninfo_all_blocks=1 00:04:27.345 --rc geninfo_unexecuted_blocks=1 00:04:27.345 00:04:27.345 ' 00:04:27.345 12:36:52 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:27.345 12:36:52 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:27.345 12:36:52 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:27.345 12:36:52 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:27.345 12:36:52 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:27.345 12:36:52 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:27.345 12:36:52 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:27.345 12:36:52 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:27.345 12:36:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:27.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.345 12:36:52 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58000 00:04:27.345 12:36:52 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58000 00:04:27.345 12:36:52 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58000 ']' 00:04:27.345 12:36:52 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.345 12:36:52 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:27.345 12:36:52 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.345 12:36:52 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:27.345 12:36:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:27.345 12:36:52 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:27.603 [2024-11-20 12:36:52.864980] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
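spdk_tgt speaks JSON-RPC only over a UNIX-domain socket; the tcp.sh run below interposes socat so the same socket answers on 127.0.0.1:9998, then drives rpc.py against the TCP side. A single-shot sketch of that bridge (one connection is serviced, then socat exits on its own, exactly as in the trace that follows):

SOCK=/var/tmp/spdk.sock
PORT=9998

socat "TCP-LISTEN:${PORT}" "UNIX-CONNECT:${SOCK}" &
socat_pid=$!

# Same RPC as over the UNIX socket, now reaching it through TCP.
scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p "${PORT}" rpc_get_methods

kill "$socat_pid" 2>/dev/null || true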
00:04:27.603 [2024-11-20 12:36:52.865098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58000 ] 00:04:27.603 [2024-11-20 12:36:53.020152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:27.860 [2024-11-20 12:36:53.124137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.860 [2024-11-20 12:36:53.124234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.426 12:36:53 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.426 12:36:53 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:28.426 12:36:53 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58011 00:04:28.426 12:36:53 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:28.426 12:36:53 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:28.426 [ 00:04:28.426 "bdev_malloc_delete", 00:04:28.426 "bdev_malloc_create", 00:04:28.426 "bdev_null_resize", 00:04:28.426 "bdev_null_delete", 00:04:28.426 "bdev_null_create", 00:04:28.426 "bdev_nvme_cuse_unregister", 00:04:28.426 "bdev_nvme_cuse_register", 00:04:28.426 "bdev_opal_new_user", 00:04:28.426 "bdev_opal_set_lock_state", 00:04:28.426 "bdev_opal_delete", 00:04:28.426 "bdev_opal_get_info", 00:04:28.426 "bdev_opal_create", 00:04:28.426 "bdev_nvme_opal_revert", 00:04:28.426 "bdev_nvme_opal_init", 00:04:28.426 "bdev_nvme_send_cmd", 00:04:28.426 "bdev_nvme_set_keys", 00:04:28.426 "bdev_nvme_get_path_iostat", 00:04:28.426 "bdev_nvme_get_mdns_discovery_info", 00:04:28.426 "bdev_nvme_stop_mdns_discovery", 00:04:28.426 "bdev_nvme_start_mdns_discovery", 00:04:28.426 "bdev_nvme_set_multipath_policy", 00:04:28.426 "bdev_nvme_set_preferred_path", 00:04:28.426 "bdev_nvme_get_io_paths", 00:04:28.426 "bdev_nvme_remove_error_injection", 00:04:28.426 "bdev_nvme_add_error_injection", 00:04:28.426 "bdev_nvme_get_discovery_info", 00:04:28.426 "bdev_nvme_stop_discovery", 00:04:28.426 "bdev_nvme_start_discovery", 00:04:28.426 "bdev_nvme_get_controller_health_info", 00:04:28.426 "bdev_nvme_disable_controller", 00:04:28.426 "bdev_nvme_enable_controller", 00:04:28.426 "bdev_nvme_reset_controller", 00:04:28.426 "bdev_nvme_get_transport_statistics", 00:04:28.426 "bdev_nvme_apply_firmware", 00:04:28.426 "bdev_nvme_detach_controller", 00:04:28.426 "bdev_nvme_get_controllers", 00:04:28.426 "bdev_nvme_attach_controller", 00:04:28.426 "bdev_nvme_set_hotplug", 00:04:28.426 "bdev_nvme_set_options", 00:04:28.426 "bdev_passthru_delete", 00:04:28.426 "bdev_passthru_create", 00:04:28.426 "bdev_lvol_set_parent_bdev", 00:04:28.426 "bdev_lvol_set_parent", 00:04:28.426 "bdev_lvol_check_shallow_copy", 00:04:28.426 "bdev_lvol_start_shallow_copy", 00:04:28.426 "bdev_lvol_grow_lvstore", 00:04:28.426 "bdev_lvol_get_lvols", 00:04:28.426 "bdev_lvol_get_lvstores", 00:04:28.426 "bdev_lvol_delete", 00:04:28.426 "bdev_lvol_set_read_only", 00:04:28.426 "bdev_lvol_resize", 00:04:28.426 "bdev_lvol_decouple_parent", 00:04:28.426 "bdev_lvol_inflate", 00:04:28.426 "bdev_lvol_rename", 00:04:28.426 "bdev_lvol_clone_bdev", 00:04:28.426 "bdev_lvol_clone", 00:04:28.426 "bdev_lvol_snapshot", 00:04:28.426 "bdev_lvol_create", 00:04:28.426 "bdev_lvol_delete_lvstore", 00:04:28.426 "bdev_lvol_rename_lvstore", 00:04:28.426 
"bdev_lvol_create_lvstore", 00:04:28.426 "bdev_raid_set_options", 00:04:28.426 "bdev_raid_remove_base_bdev", 00:04:28.426 "bdev_raid_add_base_bdev", 00:04:28.426 "bdev_raid_delete", 00:04:28.426 "bdev_raid_create", 00:04:28.426 "bdev_raid_get_bdevs", 00:04:28.426 "bdev_error_inject_error", 00:04:28.427 "bdev_error_delete", 00:04:28.427 "bdev_error_create", 00:04:28.427 "bdev_split_delete", 00:04:28.427 "bdev_split_create", 00:04:28.427 "bdev_delay_delete", 00:04:28.427 "bdev_delay_create", 00:04:28.427 "bdev_delay_update_latency", 00:04:28.427 "bdev_zone_block_delete", 00:04:28.427 "bdev_zone_block_create", 00:04:28.427 "blobfs_create", 00:04:28.427 "blobfs_detect", 00:04:28.427 "blobfs_set_cache_size", 00:04:28.427 "bdev_xnvme_delete", 00:04:28.427 "bdev_xnvme_create", 00:04:28.427 "bdev_aio_delete", 00:04:28.427 "bdev_aio_rescan", 00:04:28.427 "bdev_aio_create", 00:04:28.427 "bdev_ftl_set_property", 00:04:28.427 "bdev_ftl_get_properties", 00:04:28.427 "bdev_ftl_get_stats", 00:04:28.427 "bdev_ftl_unmap", 00:04:28.427 "bdev_ftl_unload", 00:04:28.427 "bdev_ftl_delete", 00:04:28.427 "bdev_ftl_load", 00:04:28.427 "bdev_ftl_create", 00:04:28.427 "bdev_virtio_attach_controller", 00:04:28.427 "bdev_virtio_scsi_get_devices", 00:04:28.427 "bdev_virtio_detach_controller", 00:04:28.427 "bdev_virtio_blk_set_hotplug", 00:04:28.427 "bdev_iscsi_delete", 00:04:28.427 "bdev_iscsi_create", 00:04:28.427 "bdev_iscsi_set_options", 00:04:28.427 "accel_error_inject_error", 00:04:28.427 "ioat_scan_accel_module", 00:04:28.427 "dsa_scan_accel_module", 00:04:28.427 "iaa_scan_accel_module", 00:04:28.427 "keyring_file_remove_key", 00:04:28.427 "keyring_file_add_key", 00:04:28.427 "keyring_linux_set_options", 00:04:28.427 "fsdev_aio_delete", 00:04:28.427 "fsdev_aio_create", 00:04:28.427 "iscsi_get_histogram", 00:04:28.427 "iscsi_enable_histogram", 00:04:28.427 "iscsi_set_options", 00:04:28.427 "iscsi_get_auth_groups", 00:04:28.427 "iscsi_auth_group_remove_secret", 00:04:28.427 "iscsi_auth_group_add_secret", 00:04:28.427 "iscsi_delete_auth_group", 00:04:28.427 "iscsi_create_auth_group", 00:04:28.427 "iscsi_set_discovery_auth", 00:04:28.427 "iscsi_get_options", 00:04:28.427 "iscsi_target_node_request_logout", 00:04:28.427 "iscsi_target_node_set_redirect", 00:04:28.427 "iscsi_target_node_set_auth", 00:04:28.427 "iscsi_target_node_add_lun", 00:04:28.427 "iscsi_get_stats", 00:04:28.427 "iscsi_get_connections", 00:04:28.427 "iscsi_portal_group_set_auth", 00:04:28.427 "iscsi_start_portal_group", 00:04:28.427 "iscsi_delete_portal_group", 00:04:28.427 "iscsi_create_portal_group", 00:04:28.427 "iscsi_get_portal_groups", 00:04:28.427 "iscsi_delete_target_node", 00:04:28.427 "iscsi_target_node_remove_pg_ig_maps", 00:04:28.427 "iscsi_target_node_add_pg_ig_maps", 00:04:28.427 "iscsi_create_target_node", 00:04:28.427 "iscsi_get_target_nodes", 00:04:28.427 "iscsi_delete_initiator_group", 00:04:28.427 "iscsi_initiator_group_remove_initiators", 00:04:28.427 "iscsi_initiator_group_add_initiators", 00:04:28.427 "iscsi_create_initiator_group", 00:04:28.427 "iscsi_get_initiator_groups", 00:04:28.427 "nvmf_set_crdt", 00:04:28.427 "nvmf_set_config", 00:04:28.427 "nvmf_set_max_subsystems", 00:04:28.427 "nvmf_stop_mdns_prr", 00:04:28.427 "nvmf_publish_mdns_prr", 00:04:28.427 "nvmf_subsystem_get_listeners", 00:04:28.427 "nvmf_subsystem_get_qpairs", 00:04:28.427 "nvmf_subsystem_get_controllers", 00:04:28.427 "nvmf_get_stats", 00:04:28.427 "nvmf_get_transports", 00:04:28.427 "nvmf_create_transport", 00:04:28.427 "nvmf_get_targets", 00:04:28.427 
"nvmf_delete_target", 00:04:28.427 "nvmf_create_target", 00:04:28.427 "nvmf_subsystem_allow_any_host", 00:04:28.427 "nvmf_subsystem_set_keys", 00:04:28.427 "nvmf_subsystem_remove_host", 00:04:28.427 "nvmf_subsystem_add_host", 00:04:28.427 "nvmf_ns_remove_host", 00:04:28.427 "nvmf_ns_add_host", 00:04:28.427 "nvmf_subsystem_remove_ns", 00:04:28.427 "nvmf_subsystem_set_ns_ana_group", 00:04:28.427 "nvmf_subsystem_add_ns", 00:04:28.427 "nvmf_subsystem_listener_set_ana_state", 00:04:28.427 "nvmf_discovery_get_referrals", 00:04:28.427 "nvmf_discovery_remove_referral", 00:04:28.427 "nvmf_discovery_add_referral", 00:04:28.427 "nvmf_subsystem_remove_listener", 00:04:28.427 "nvmf_subsystem_add_listener", 00:04:28.427 "nvmf_delete_subsystem", 00:04:28.427 "nvmf_create_subsystem", 00:04:28.427 "nvmf_get_subsystems", 00:04:28.427 "env_dpdk_get_mem_stats", 00:04:28.427 "nbd_get_disks", 00:04:28.427 "nbd_stop_disk", 00:04:28.427 "nbd_start_disk", 00:04:28.427 "ublk_recover_disk", 00:04:28.427 "ublk_get_disks", 00:04:28.427 "ublk_stop_disk", 00:04:28.427 "ublk_start_disk", 00:04:28.427 "ublk_destroy_target", 00:04:28.427 "ublk_create_target", 00:04:28.427 "virtio_blk_create_transport", 00:04:28.427 "virtio_blk_get_transports", 00:04:28.427 "vhost_controller_set_coalescing", 00:04:28.427 "vhost_get_controllers", 00:04:28.427 "vhost_delete_controller", 00:04:28.427 "vhost_create_blk_controller", 00:04:28.427 "vhost_scsi_controller_remove_target", 00:04:28.427 "vhost_scsi_controller_add_target", 00:04:28.427 "vhost_start_scsi_controller", 00:04:28.427 "vhost_create_scsi_controller", 00:04:28.427 "thread_set_cpumask", 00:04:28.427 "scheduler_set_options", 00:04:28.427 "framework_get_governor", 00:04:28.427 "framework_get_scheduler", 00:04:28.427 "framework_set_scheduler", 00:04:28.427 "framework_get_reactors", 00:04:28.427 "thread_get_io_channels", 00:04:28.427 "thread_get_pollers", 00:04:28.427 "thread_get_stats", 00:04:28.427 "framework_monitor_context_switch", 00:04:28.427 "spdk_kill_instance", 00:04:28.427 "log_enable_timestamps", 00:04:28.427 "log_get_flags", 00:04:28.427 "log_clear_flag", 00:04:28.427 "log_set_flag", 00:04:28.427 "log_get_level", 00:04:28.427 "log_set_level", 00:04:28.427 "log_get_print_level", 00:04:28.427 "log_set_print_level", 00:04:28.427 "framework_enable_cpumask_locks", 00:04:28.427 "framework_disable_cpumask_locks", 00:04:28.427 "framework_wait_init", 00:04:28.427 "framework_start_init", 00:04:28.427 "scsi_get_devices", 00:04:28.427 "bdev_get_histogram", 00:04:28.427 "bdev_enable_histogram", 00:04:28.427 "bdev_set_qos_limit", 00:04:28.427 "bdev_set_qd_sampling_period", 00:04:28.427 "bdev_get_bdevs", 00:04:28.427 "bdev_reset_iostat", 00:04:28.427 "bdev_get_iostat", 00:04:28.427 "bdev_examine", 00:04:28.427 "bdev_wait_for_examine", 00:04:28.427 "bdev_set_options", 00:04:28.427 "accel_get_stats", 00:04:28.427 "accel_set_options", 00:04:28.427 "accel_set_driver", 00:04:28.427 "accel_crypto_key_destroy", 00:04:28.427 "accel_crypto_keys_get", 00:04:28.427 "accel_crypto_key_create", 00:04:28.427 "accel_assign_opc", 00:04:28.427 "accel_get_module_info", 00:04:28.427 "accel_get_opc_assignments", 00:04:28.427 "vmd_rescan", 00:04:28.427 "vmd_remove_device", 00:04:28.427 "vmd_enable", 00:04:28.427 "sock_get_default_impl", 00:04:28.427 "sock_set_default_impl", 00:04:28.427 "sock_impl_set_options", 00:04:28.427 "sock_impl_get_options", 00:04:28.427 "iobuf_get_stats", 00:04:28.428 "iobuf_set_options", 00:04:28.428 "keyring_get_keys", 00:04:28.428 "framework_get_pci_devices", 00:04:28.428 
"framework_get_config", 00:04:28.428 "framework_get_subsystems", 00:04:28.428 "fsdev_set_opts", 00:04:28.428 "fsdev_get_opts", 00:04:28.428 "trace_get_info", 00:04:28.428 "trace_get_tpoint_group_mask", 00:04:28.428 "trace_disable_tpoint_group", 00:04:28.428 "trace_enable_tpoint_group", 00:04:28.428 "trace_clear_tpoint_mask", 00:04:28.428 "trace_set_tpoint_mask", 00:04:28.428 "notify_get_notifications", 00:04:28.428 "notify_get_types", 00:04:28.428 "spdk_get_version", 00:04:28.428 "rpc_get_methods" 00:04:28.428 ] 00:04:28.428 12:36:53 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:28.428 12:36:53 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:28.428 12:36:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:28.686 12:36:53 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:28.686 12:36:53 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58000 00:04:28.686 12:36:53 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58000 ']' 00:04:28.686 12:36:53 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58000 00:04:28.686 12:36:53 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:28.686 12:36:53 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.686 12:36:53 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58000 00:04:28.686 12:36:53 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.686 12:36:53 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.686 12:36:53 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58000' 00:04:28.686 killing process with pid 58000 00:04:28.686 12:36:53 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58000 00:04:28.686 12:36:53 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58000 00:04:30.061 00:04:30.061 real 0m2.892s 00:04:30.061 user 0m5.189s 00:04:30.061 sys 0m0.453s 00:04:30.061 12:36:55 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.061 12:36:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:30.061 ************************************ 00:04:30.061 END TEST spdkcli_tcp 00:04:30.061 ************************************ 00:04:30.320 12:36:55 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:30.320 12:36:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.320 12:36:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.320 12:36:55 -- common/autotest_common.sh@10 -- # set +x 00:04:30.320 ************************************ 00:04:30.320 START TEST dpdk_mem_utility 00:04:30.320 ************************************ 00:04:30.320 12:36:55 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:30.320 * Looking for test storage... 
00:04:30.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:30.320 12:36:55 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:30.320 12:36:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:30.320 12:36:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:30.320 12:36:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.320 12:36:55 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:30.320 12:36:55 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.320 12:36:55 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:30.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.320 --rc genhtml_branch_coverage=1 00:04:30.320 --rc genhtml_function_coverage=1 00:04:30.320 --rc genhtml_legend=1 00:04:30.320 --rc geninfo_all_blocks=1 00:04:30.320 --rc geninfo_unexecuted_blocks=1 00:04:30.320 00:04:30.320 ' 00:04:30.320 12:36:55 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:30.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.320 --rc 
genhtml_branch_coverage=1 00:04:30.320 --rc genhtml_function_coverage=1 00:04:30.320 --rc genhtml_legend=1 00:04:30.320 --rc geninfo_all_blocks=1 00:04:30.320 --rc geninfo_unexecuted_blocks=1 00:04:30.320 00:04:30.320 ' 00:04:30.320 12:36:55 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:30.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.320 --rc genhtml_branch_coverage=1 00:04:30.320 --rc genhtml_function_coverage=1 00:04:30.320 --rc genhtml_legend=1 00:04:30.320 --rc geninfo_all_blocks=1 00:04:30.320 --rc geninfo_unexecuted_blocks=1 00:04:30.320 00:04:30.320 ' 00:04:30.320 12:36:55 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:30.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.320 --rc genhtml_branch_coverage=1 00:04:30.320 --rc genhtml_function_coverage=1 00:04:30.320 --rc genhtml_legend=1 00:04:30.320 --rc geninfo_all_blocks=1 00:04:30.320 --rc geninfo_unexecuted_blocks=1 00:04:30.320 00:04:30.320 ' 00:04:30.320 12:36:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:30.320 12:36:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58105 00:04:30.320 12:36:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58105 00:04:30.320 12:36:55 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58105 ']' 00:04:30.320 12:36:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:30.320 12:36:55 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.320 12:36:55 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.320 12:36:55 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.320 12:36:55 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.320 12:36:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:30.320 [2024-11-20 12:36:55.812993] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
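The utility exercised below is a two-step round trip: env_dpdk_get_mem_stats makes the target write its allocator state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders that file as the heap, mempool and memzone tables printed further down. A sketch of the same sequence (jq is a convenience assumption here; the test extracts the filename with its own rpc_cmd wrapper instead):

# Ask the running target where it dumped its DPDK memory state.
dump=$(scripts/rpc.py env_dpdk_get_mem_stats | jq -r '.filename')
echo "raw dump at: $dump"

# Whole-dump summary, then the per-heap view for heap id 0.
scripts/dpdk_mem_info.py
scripts/dpdk_mem_info.py -m 0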
00:04:30.320 [2024-11-20 12:36:55.813120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58105 ] 00:04:30.579 [2024-11-20 12:36:55.968853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.579 [2024-11-20 12:36:56.071904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.515 12:36:56 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.515 12:36:56 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:31.515 12:36:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:31.515 12:36:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:31.515 12:36:56 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.515 12:36:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:31.515 { 00:04:31.515 "filename": "/tmp/spdk_mem_dump.txt" 00:04:31.515 } 00:04:31.515 12:36:56 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.515 12:36:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:31.515 DPDK memory size 816.000000 MiB in 1 heap(s) 00:04:31.515 1 heaps totaling size 816.000000 MiB 00:04:31.515 size: 816.000000 MiB heap id: 0 00:04:31.515 end heaps---------- 00:04:31.515 9 mempools totaling size 595.772034 MiB 00:04:31.515 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:31.515 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:31.515 size: 92.545471 MiB name: bdev_io_58105 00:04:31.515 size: 50.003479 MiB name: msgpool_58105 00:04:31.515 size: 36.509338 MiB name: fsdev_io_58105 00:04:31.515 size: 21.763794 MiB name: PDU_Pool 00:04:31.515 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:31.515 size: 4.133484 MiB name: evtpool_58105 00:04:31.515 size: 0.026123 MiB name: Session_Pool 00:04:31.515 end mempools------- 00:04:31.515 6 memzones totaling size 4.142822 MiB 00:04:31.515 size: 1.000366 MiB name: RG_ring_0_58105 00:04:31.515 size: 1.000366 MiB name: RG_ring_1_58105 00:04:31.515 size: 1.000366 MiB name: RG_ring_4_58105 00:04:31.515 size: 1.000366 MiB name: RG_ring_5_58105 00:04:31.515 size: 0.125366 MiB name: RG_ring_2_58105 00:04:31.515 size: 0.015991 MiB name: RG_ring_3_58105 00:04:31.515 end memzones------- 00:04:31.515 12:36:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:31.515 heap id: 0 total size: 816.000000 MiB number of busy elements: 320 number of free elements: 18 00:04:31.515 list of free elements. 
size: 16.790161 MiB 00:04:31.515 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:31.515 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:31.515 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:31.515 element at address: 0x200018d00040 with size: 0.999939 MiB 00:04:31.515 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:31.515 element at address: 0x200019200000 with size: 0.999084 MiB 00:04:31.515 element at address: 0x200031e00000 with size: 0.994324 MiB 00:04:31.515 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:31.515 element at address: 0x200018a00000 with size: 0.959656 MiB 00:04:31.515 element at address: 0x200019500040 with size: 0.936401 MiB 00:04:31.515 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:31.515 element at address: 0x20001ac00000 with size: 0.560486 MiB 00:04:31.515 element at address: 0x200000c00000 with size: 0.490173 MiB 00:04:31.515 element at address: 0x200018e00000 with size: 0.487976 MiB 00:04:31.515 element at address: 0x200019600000 with size: 0.485413 MiB 00:04:31.515 element at address: 0x200012c00000 with size: 0.443237 MiB 00:04:31.515 element at address: 0x200028000000 with size: 0.390686 MiB 00:04:31.515 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:31.515 list of standard malloc elements. size: 199.288940 MiB 00:04:31.515 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:31.515 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:31.515 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:04:31.515 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:31.515 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:31.515 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:31.515 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:04:31.515 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:31.515 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:31.515 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:04:31.515 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:31.515 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:31.515 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:31.515 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:31.515 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:31.515 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:31.515 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:31.515 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:31.515 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:31.515 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:31.515 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:04:31.516 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:31.516 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200012c71780 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200012c71880 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200012c71980 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200012c72080 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200012c72180 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200018e7cec0 
with size: 0.000244 MiB 00:04:31.516 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:04:31.516 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac917c0 with size: 0.000244 MiB 
00:04:31.517 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:04:31.517 element at 
address: 0x20001ac949c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:04:31.517 element at address: 0x200028064040 with size: 0.000244 MiB 00:04:31.517 element at address: 0x200028064140 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806ae00 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806b080 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806b180 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806b280 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806b380 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806b480 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806b580 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806b680 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806b780 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806b880 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806b980 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806be80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806c080 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806c180 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806c280 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806c380 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806c480 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806c580 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806c680 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806c780 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806c880 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806c980 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806d080 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806d180 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806d280 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806d380 
with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806d480 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806d580 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806d680 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806d780 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806d880 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806d980 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806da80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806db80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806de80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806df80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806e080 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806e180 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806e280 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806e380 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806e480 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806e580 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806e680 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806e780 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806e880 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806e980 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806f080 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806f180 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806f280 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806f380 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806f480 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806f580 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806f680 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806f780 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806f880 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806f980 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:04:31.517 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:04:31.517 list of memzone associated elements. 
size: 599.920898 MiB 00:04:31.517 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:04:31.517 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:31.517 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:04:31.518 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:31.518 element at address: 0x200012df4740 with size: 92.045105 MiB 00:04:31.518 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58105_0 00:04:31.518 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:31.518 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58105_0 00:04:31.518 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:31.518 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58105_0 00:04:31.518 element at address: 0x2000197be900 with size: 20.255615 MiB 00:04:31.518 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:31.518 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:04:31.518 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:31.518 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:31.518 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58105_0 00:04:31.518 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:31.518 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58105 00:04:31.518 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:31.518 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58105 00:04:31.518 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:31.518 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:31.518 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:04:31.518 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:31.518 element at address: 0x200018afde00 with size: 1.008179 MiB 00:04:31.518 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:31.518 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:04:31.518 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:31.518 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:31.518 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58105 00:04:31.518 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:31.518 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58105 00:04:31.518 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:04:31.518 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58105 00:04:31.518 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:04:31.518 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58105 00:04:31.518 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:31.518 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58105 00:04:31.518 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:31.518 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58105 00:04:31.518 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:04:31.518 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:31.518 element at address: 0x200012c72280 with size: 0.500549 MiB 00:04:31.518 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:31.518 element at address: 0x20001967c440 with size: 0.250549 MiB 00:04:31.518 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:31.518 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:31.518 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58105 00:04:31.518 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:31.518 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58105 00:04:31.518 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:04:31.518 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:31.518 element at address: 0x200028064240 with size: 0.023804 MiB 00:04:31.518 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:31.518 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:31.518 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58105 00:04:31.518 element at address: 0x20002806a3c0 with size: 0.002502 MiB 00:04:31.518 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:31.518 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:31.518 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58105 00:04:31.518 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:31.518 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58105 00:04:31.518 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:31.518 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58105 00:04:31.518 element at address: 0x20002806af00 with size: 0.000366 MiB 00:04:31.518 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:31.518 12:36:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:31.518 12:36:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58105 00:04:31.518 12:36:56 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58105 ']' 00:04:31.518 12:36:56 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58105 00:04:31.518 12:36:56 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:31.518 12:36:56 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.518 12:36:56 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58105 00:04:31.518 12:36:56 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.518 12:36:56 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.518 killing process with pid 58105 00:04:31.518 12:36:56 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58105' 00:04:31.518 12:36:56 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58105 00:04:31.518 12:36:56 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58105 00:04:32.894 00:04:32.894 real 0m2.706s 00:04:32.894 user 0m2.633s 00:04:32.894 sys 0m0.437s 00:04:32.894 12:36:58 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.894 ************************************ 00:04:32.894 END TEST dpdk_mem_utility 00:04:32.894 ************************************ 00:04:32.894 12:36:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:32.894 12:36:58 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:32.894 12:36:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.894 12:36:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.894 12:36:58 -- common/autotest_common.sh@10 -- # set +x 
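The long run of "element at address ... with size ..." lines above is the raw per-element heap and memzone dump printed during the dpdk_mem_utility test. A dump like this is easier to eyeball when collapsed by size class; a throwaway helper along these lines works, assuming the console output above was saved to a file (dpdk_mem_info.log is a hypothetical name, not something the test produces):

    # Hypothetical one-off: tally the heap dump above by element size.
    grep -o 'element at address: 0x[0-9a-f]* with size: [0-9.]* MiB' dpdk_mem_info.log |
        awk '{ count[$7]++; total += $7 }
             END { for (s in count) printf "%6d element(s) of %s MiB\n", count[s], s
                   printf "total: %.6f MiB\n", total }'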
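The killprocess trace just above (pid 58105) also documents the teardown pattern repeated after every test in this run: check that a pid was supplied, probe it with kill -0, confirm via ps that the command name is not sudo before signalling, then kill and reap. A condensed sketch of that flow (the real helper in common/autotest_common.sh covers more cases than shown here, so treat this as an approximation):

    # Minimal sketch of the killprocess flow traced above.
    killprocess_sketch() {
        local pid=$1
        [[ -n $pid ]] || return 1          # the '[' -z 58105 ']' guard
        kill -0 "$pid" || return 1         # process must still be alive
        if [[ $(uname) == Linux ]]; then
            # refuse to signal sudo itself, as the comm= check in the trace does
            [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                        # reap so the next test starts clean
    }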
00:04:32.894 ************************************ 00:04:32.894 START TEST event 00:04:32.894 ************************************ 00:04:32.894 12:36:58 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:32.894 * Looking for test storage... 00:04:33.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:33.153 12:36:58 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:33.153 12:36:58 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:33.153 12:36:58 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:33.153 12:36:58 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:33.153 12:36:58 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.153 12:36:58 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.153 12:36:58 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.153 12:36:58 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.153 12:36:58 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.153 12:36:58 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.153 12:36:58 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.153 12:36:58 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.153 12:36:58 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.153 12:36:58 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.153 12:36:58 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.153 12:36:58 event -- scripts/common.sh@344 -- # case "$op" in 00:04:33.153 12:36:58 event -- scripts/common.sh@345 -- # : 1 00:04:33.153 12:36:58 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.153 12:36:58 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:33.153 12:36:58 event -- scripts/common.sh@365 -- # decimal 1 00:04:33.153 12:36:58 event -- scripts/common.sh@353 -- # local d=1 00:04:33.153 12:36:58 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.153 12:36:58 event -- scripts/common.sh@355 -- # echo 1 00:04:33.153 12:36:58 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.153 12:36:58 event -- scripts/common.sh@366 -- # decimal 2 00:04:33.153 12:36:58 event -- scripts/common.sh@353 -- # local d=2 00:04:33.153 12:36:58 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.153 12:36:58 event -- scripts/common.sh@355 -- # echo 2 00:04:33.153 12:36:58 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.153 12:36:58 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.153 12:36:58 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.153 12:36:58 event -- scripts/common.sh@368 -- # return 0 00:04:33.153 12:36:58 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.153 12:36:58 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:33.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.153 --rc genhtml_branch_coverage=1 00:04:33.153 --rc genhtml_function_coverage=1 00:04:33.153 --rc genhtml_legend=1 00:04:33.153 --rc geninfo_all_blocks=1 00:04:33.153 --rc geninfo_unexecuted_blocks=1 00:04:33.153 00:04:33.153 ' 00:04:33.153 12:36:58 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:33.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.153 --rc genhtml_branch_coverage=1 00:04:33.153 --rc genhtml_function_coverage=1 00:04:33.153 --rc genhtml_legend=1 00:04:33.153 --rc 
geninfo_all_blocks=1 00:04:33.153 --rc geninfo_unexecuted_blocks=1 00:04:33.153 00:04:33.153 ' 00:04:33.153 12:36:58 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:33.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.153 --rc genhtml_branch_coverage=1 00:04:33.153 --rc genhtml_function_coverage=1 00:04:33.153 --rc genhtml_legend=1 00:04:33.153 --rc geninfo_all_blocks=1 00:04:33.153 --rc geninfo_unexecuted_blocks=1 00:04:33.153 00:04:33.153 ' 00:04:33.153 12:36:58 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:33.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.153 --rc genhtml_branch_coverage=1 00:04:33.153 --rc genhtml_function_coverage=1 00:04:33.153 --rc genhtml_legend=1 00:04:33.153 --rc geninfo_all_blocks=1 00:04:33.153 --rc geninfo_unexecuted_blocks=1 00:04:33.153 00:04:33.153 ' 00:04:33.153 12:36:58 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:33.153 12:36:58 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:33.153 12:36:58 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:33.153 12:36:58 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:33.153 12:36:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.153 12:36:58 event -- common/autotest_common.sh@10 -- # set +x 00:04:33.153 ************************************ 00:04:33.153 START TEST event_perf 00:04:33.154 ************************************ 00:04:33.154 12:36:58 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:33.154 Running I/O for 1 seconds...[2024-11-20 12:36:58.521409] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:04:33.154 [2024-11-20 12:36:58.521526] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58197 ] 00:04:33.412 [2024-11-20 12:36:58.677698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:33.412 [2024-11-20 12:36:58.780543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.412 [2024-11-20 12:36:58.780828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:33.412 [2024-11-20 12:36:58.781033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.412 Running I/O for 1 seconds...[2024-11-20 12:36:58.781056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:34.787 00:04:34.787 lcore 0: 149763 00:04:34.787 lcore 1: 149757 00:04:34.787 lcore 2: 149757 00:04:34.787 lcore 3: 149760 00:04:34.787 done. 
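The xtrace block at the start of TEST event above (scripts/common.sh, repeated again for the scheduler suite below) is just a portable version comparison: lcov --version is reduced to its last field with awk, then lt 1.15 2 splits both strings on ".", "-" and ":" and compares the numeric components pairwise to decide which LCOV_OPTS block to export. Stripped of the tracing, the logic condenses to roughly this sketch (simplified to the "<" case; the real cmp_versions also handles ">" and "==" and validates each component as a decimal; the final line assumes lcov is installed):

    # Rough reconstruction of the lt/cmp_versions logic traced above.
    lt_sketch() {                          # usage: lt_sketch 1.15 2
        local -a ver1 ver2
        local v ver1_l ver2_l
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            # missing components compare as 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                           # equal is not "less than"
    }
    lt_sketch "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov older than 2"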
00:04:34.787 00:04:34.787 real 0m1.457s 00:04:34.787 user 0m4.265s 00:04:34.787 sys 0m0.068s 00:04:34.787 12:36:59 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.787 12:36:59 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:34.787 ************************************ 00:04:34.787 END TEST event_perf 00:04:34.787 ************************************ 00:04:34.787 12:36:59 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:34.787 12:36:59 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:34.787 12:36:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.787 12:36:59 event -- common/autotest_common.sh@10 -- # set +x 00:04:34.787 ************************************ 00:04:34.787 START TEST event_reactor 00:04:34.787 ************************************ 00:04:34.787 12:36:59 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:34.787 [2024-11-20 12:37:00.025811] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:04:34.787 [2024-11-20 12:37:00.025961] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58242 ] 00:04:34.787 [2024-11-20 12:37:00.188836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.787 [2024-11-20 12:37:00.289435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.159 test_start 00:04:36.159 oneshot 00:04:36.159 tick 100 00:04:36.159 tick 100 00:04:36.159 tick 250 00:04:36.159 tick 100 00:04:36.159 tick 100 00:04:36.159 tick 250 00:04:36.159 tick 100 00:04:36.159 tick 500 00:04:36.159 tick 100 00:04:36.159 tick 100 00:04:36.159 tick 250 00:04:36.159 tick 100 00:04:36.159 tick 100 00:04:36.159 test_end 00:04:36.159 00:04:36.159 real 0m1.449s 00:04:36.159 user 0m1.277s 00:04:36.159 sys 0m0.064s 00:04:36.159 12:37:01 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.159 12:37:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:36.159 ************************************ 00:04:36.159 END TEST event_reactor 00:04:36.159 ************************************ 00:04:36.159 12:37:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:36.159 12:37:01 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:36.159 12:37:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.159 12:37:01 event -- common/autotest_common.sh@10 -- # set +x 00:04:36.159 ************************************ 00:04:36.159 START TEST event_reactor_perf 00:04:36.159 ************************************ 00:04:36.159 12:37:01 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:36.159 [2024-11-20 12:37:01.515422] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
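Every suite in this log, event_perf and event_reactor above included, is bracketed the same way: a run_test invocation, a START TEST banner, the test body, bash's real/user/sys timing, then an END TEST banner; the '[' 6 -le 1 ']' and '[' 4 -le 1 ']' lines look like its argument-count guard. A minimal wrapper reproducing just the visible behaviour might look like this (the real run_test in common/autotest_common.sh does more, e.g. the xtrace_disable bookkeeping seen in the trace; the usage line below echoes the event_perf invocation from the log):

    # Minimal sketch of the run_test banner/timing wrapper.
    run_test_sketch() {                    # usage: run_test_sketch <name> <cmd...>
        local name=$1; shift
        local stars='************************************'
        echo "$stars"; echo "START TEST $name"; echo "$stars"
        time "$@"                          # emits the real/user/sys lines
        local rc=$?
        echo "$stars"; echo "END TEST $name"; echo "$stars"
        return $rc
    }
    run_test_sketch event_perf ./event_perf -m 0xF -t 1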
00:04:36.159 [2024-11-20 12:37:01.515539] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58273 ] 00:04:36.417 [2024-11-20 12:37:01.678109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.417 [2024-11-20 12:37:01.776535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.790 test_start 00:04:37.790 test_end 00:04:37.790 Performance: 315902 events per second 00:04:37.790 00:04:37.790 real 0m1.440s 00:04:37.790 user 0m1.270s 00:04:37.790 sys 0m0.062s 00:04:37.790 12:37:02 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.790 12:37:02 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:37.790 ************************************ 00:04:37.790 END TEST event_reactor_perf 00:04:37.790 ************************************ 00:04:37.790 12:37:02 event -- event/event.sh@49 -- # uname -s 00:04:37.790 12:37:02 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:37.790 12:37:02 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:37.790 12:37:02 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.790 12:37:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.790 12:37:02 event -- common/autotest_common.sh@10 -- # set +x 00:04:37.790 ************************************ 00:04:37.790 START TEST event_scheduler 00:04:37.790 ************************************ 00:04:37.790 12:37:02 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:37.790 * Looking for test storage... 
00:04:37.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:37.790 12:37:03 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:37.790 12:37:03 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:37.790 12:37:03 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:37.790 12:37:03 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.790 12:37:03 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:37.790 12:37:03 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.791 12:37:03 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:37.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.791 --rc genhtml_branch_coverage=1 00:04:37.791 --rc genhtml_function_coverage=1 00:04:37.791 --rc genhtml_legend=1 00:04:37.791 --rc geninfo_all_blocks=1 00:04:37.791 --rc geninfo_unexecuted_blocks=1 00:04:37.791 00:04:37.791 ' 00:04:37.791 12:37:03 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:37.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.791 --rc genhtml_branch_coverage=1 00:04:37.791 --rc genhtml_function_coverage=1 00:04:37.791 --rc genhtml_legend=1 00:04:37.791 --rc geninfo_all_blocks=1 00:04:37.791 --rc geninfo_unexecuted_blocks=1 00:04:37.791 00:04:37.791 ' 00:04:37.791 12:37:03 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:37.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.791 --rc genhtml_branch_coverage=1 00:04:37.791 --rc genhtml_function_coverage=1 00:04:37.791 --rc genhtml_legend=1 00:04:37.791 --rc geninfo_all_blocks=1 00:04:37.791 --rc geninfo_unexecuted_blocks=1 00:04:37.791 00:04:37.791 ' 00:04:37.791 12:37:03 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:37.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.791 --rc genhtml_branch_coverage=1 00:04:37.791 --rc genhtml_function_coverage=1 00:04:37.791 --rc genhtml_legend=1 00:04:37.791 --rc geninfo_all_blocks=1 00:04:37.791 --rc geninfo_unexecuted_blocks=1 00:04:37.791 00:04:37.791 ' 00:04:37.791 12:37:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:37.791 12:37:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58349 00:04:37.791 12:37:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.791 12:37:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58349 00:04:37.791 12:37:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:37.791 12:37:03 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58349 ']' 00:04:37.791 12:37:03 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.791 12:37:03 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.791 12:37:03 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.791 12:37:03 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.791 12:37:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:37.791 [2024-11-20 12:37:03.187320] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:04:37.791 [2024-11-20 12:37:03.187458] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58349 ] 00:04:38.048 [2024-11-20 12:37:03.347530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:38.048 [2024-11-20 12:37:03.448367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.048 [2024-11-20 12:37:03.448622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.048 [2024-11-20 12:37:03.448876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:38.048 [2024-11-20 12:37:03.448893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:38.615 12:37:03 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.615 12:37:03 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:38.615 12:37:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:38.615 12:37:03 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.615 12:37:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:38.615 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:38.615 POWER: Cannot set governor of lcore 0 to userspace 00:04:38.615 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:38.615 POWER: Cannot set governor of lcore 0 to performance 00:04:38.615 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:38.615 POWER: Cannot set governor of lcore 0 to userspace 00:04:38.615 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:38.615 POWER: Cannot set governor of lcore 0 to userspace 00:04:38.615 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:38.615 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:38.615 POWER: Unable to set Power Management Environment for lcore 0 00:04:38.615 [2024-11-20 12:37:03.982179] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:38.615 [2024-11-20 12:37:03.982200] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:38.615 [2024-11-20 12:37:03.982209] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:38.615 [2024-11-20 12:37:03.982225] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:38.615 [2024-11-20 12:37:03.982233] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:38.615 [2024-11-20 12:37:03.982242] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:38.615 12:37:03 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.616 12:37:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:38.616 12:37:03 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.616 12:37:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:38.875 [2024-11-20 12:37:04.201238] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:38.875 12:37:04 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.875 12:37:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:38.875 12:37:04 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.875 12:37:04 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.875 12:37:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:38.875 ************************************ 00:04:38.875 START TEST scheduler_create_thread 00:04:38.875 ************************************ 00:04:38.875 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:38.875 12:37:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:38.875 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.875 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.875 2 00:04:38.875 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.875 12:37:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:38.875 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.875 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.875 3 00:04:38.875 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.875 12:37:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:38.875 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.875 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.875 4 00:04:38.875 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.875 12:37:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:38.875 12:37:04 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.875 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.875 5 00:04:38.875 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.875 12:37:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:38.875 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.875 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.875 6 00:04:38.875 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.875 12:37:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:38.875 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.875 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.875 7 00:04:38.875 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.876 12:37:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:38.876 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.876 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.876 8 00:04:38.876 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.876 12:37:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:38.876 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.876 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.876 9 00:04:38.876 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.876 12:37:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:38.876 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.876 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.876 10 00:04:38.876 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.876 12:37:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:38.876 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.876 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.876 12:37:04 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.876 12:37:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:38.876 12:37:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:38.876 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.876 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.876 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.876 12:37:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:38.876 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.876 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.447 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.447 12:37:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:39.447 12:37:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:39.447 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.447 12:37:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.907 ************************************ 00:04:40.907 END TEST scheduler_create_thread 00:04:40.907 ************************************ 00:04:40.907 12:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.907 00:04:40.907 real 0m1.756s 00:04:40.907 user 0m0.014s 00:04:40.907 sys 0m0.006s 00:04:40.907 12:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.907 12:37:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.907 12:37:06 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:40.907 12:37:06 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58349 00:04:40.907 12:37:06 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58349 ']' 00:04:40.907 12:37:06 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58349 00:04:40.907 12:37:06 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:40.907 12:37:06 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.907 12:37:06 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58349 00:04:40.907 killing process with pid 58349 00:04:40.907 12:37:06 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:40.907 12:37:06 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:40.907 12:37:06 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58349' 00:04:40.907 12:37:06 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58349 00:04:40.907 12:37:06 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 58349 00:04:41.166 [2024-11-20 12:37:06.450713] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:41.731 00:04:41.731 real 0m4.049s 00:04:41.731 user 0m6.557s 00:04:41.731 sys 0m0.357s 00:04:41.731 12:37:07 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.731 12:37:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:41.731 ************************************ 00:04:41.731 END TEST event_scheduler 00:04:41.731 ************************************ 00:04:41.731 12:37:07 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:41.731 12:37:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:41.731 12:37:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.731 12:37:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.731 12:37:07 event -- common/autotest_common.sh@10 -- # set +x 00:04:41.731 ************************************ 00:04:41.731 START TEST app_repeat 00:04:41.731 ************************************ 00:04:41.731 12:37:07 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:41.731 12:37:07 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.731 12:37:07 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.731 12:37:07 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:41.731 12:37:07 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:41.731 12:37:07 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:41.731 12:37:07 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:41.731 12:37:07 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:41.731 Process app_repeat pid: 58443 00:04:41.731 spdk_app_start Round 0 00:04:41.731 12:37:07 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58443 00:04:41.731 12:37:07 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.731 12:37:07 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58443' 00:04:41.731 12:37:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:41.731 12:37:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:41.731 12:37:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58443 /var/tmp/spdk-nbd.sock 00:04:41.731 12:37:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58443 ']' 00:04:41.731 12:37:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:41.731 12:37:07 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:41.731 12:37:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:41.731 12:37:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:41.731 12:37:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.731 12:37:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:41.731 [2024-11-20 12:37:07.112921] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
00:04:41.731 [2024-11-20 12:37:07.113044] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58443 ] 00:04:41.989 [2024-11-20 12:37:07.274736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:41.989 [2024-11-20 12:37:07.370391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.989 [2024-11-20 12:37:07.370448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.556 12:37:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.556 12:37:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:42.556 12:37:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:42.813 Malloc0 00:04:42.813 12:37:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:43.071 Malloc1 00:04:43.071 12:37:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:43.071 12:37:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.071 12:37:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:43.071 12:37:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:43.071 12:37:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.071 12:37:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:43.071 12:37:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:43.071 12:37:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.071 12:37:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:43.071 12:37:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:43.071 12:37:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.071 12:37:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:43.071 12:37:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:43.071 12:37:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:43.071 12:37:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.071 12:37:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:43.329 /dev/nbd0 00:04:43.329 12:37:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:43.329 12:37:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:43.329 12:37:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:43.329 12:37:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:43.329 12:37:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:43.329 12:37:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:43.329 12:37:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:43.329 12:37:08 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:04:43.329 12:37:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:43.329 12:37:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:43.329 12:37:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:43.329 1+0 records in 00:04:43.329 1+0 records out 00:04:43.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516696 s, 7.9 MB/s 00:04:43.329 12:37:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:43.329 12:37:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:43.329 12:37:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:43.329 12:37:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:43.329 12:37:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:43.329 12:37:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:43.329 12:37:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.329 12:37:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:43.590 /dev/nbd1 00:04:43.590 12:37:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:43.590 12:37:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:43.590 12:37:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:43.590 12:37:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:43.590 12:37:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:43.590 12:37:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:43.590 12:37:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:43.590 12:37:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:43.590 12:37:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:43.590 12:37:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:43.590 12:37:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:43.590 1+0 records in 00:04:43.590 1+0 records out 00:04:43.590 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287233 s, 14.3 MB/s 00:04:43.590 12:37:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:43.590 12:37:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:43.590 12:37:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:43.590 12:37:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:43.590 12:37:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:43.590 12:37:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:43.590 12:37:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.590 12:37:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:43.590 12:37:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.590 
12:37:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:43.849 { 00:04:43.849 "nbd_device": "/dev/nbd0", 00:04:43.849 "bdev_name": "Malloc0" 00:04:43.849 }, 00:04:43.849 { 00:04:43.849 "nbd_device": "/dev/nbd1", 00:04:43.849 "bdev_name": "Malloc1" 00:04:43.849 } 00:04:43.849 ]' 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:43.849 { 00:04:43.849 "nbd_device": "/dev/nbd0", 00:04:43.849 "bdev_name": "Malloc0" 00:04:43.849 }, 00:04:43.849 { 00:04:43.849 "nbd_device": "/dev/nbd1", 00:04:43.849 "bdev_name": "Malloc1" 00:04:43.849 } 00:04:43.849 ]' 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:43.849 /dev/nbd1' 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:43.849 /dev/nbd1' 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:43.849 256+0 records in 00:04:43.849 256+0 records out 00:04:43.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0042443 s, 247 MB/s 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:43.849 256+0 records in 00:04:43.849 256+0 records out 00:04:43.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233206 s, 45.0 MB/s 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:43.849 256+0 records in 00:04:43.849 256+0 records out 00:04:43.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206275 s, 50.8 MB/s 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:43.849 12:37:09 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.849 12:37:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:43.850 12:37:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:43.850 12:37:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:43.850 12:37:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:44.107 12:37:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:44.107 12:37:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:44.107 12:37:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:44.107 12:37:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:44.107 12:37:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:44.107 12:37:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:44.107 12:37:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:44.107 12:37:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:44.107 12:37:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:44.107 12:37:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:44.365 12:37:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:44.365 12:37:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:44.365 12:37:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:44.365 12:37:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:44.365 12:37:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:44.365 12:37:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:44.365 12:37:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:44.365 12:37:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:44.365 12:37:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:44.365 12:37:09 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.365 12:37:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:44.622 12:37:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:44.622 12:37:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:44.622 12:37:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:44.622 12:37:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:44.622 12:37:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:44.622 12:37:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:44.622 12:37:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:44.622 12:37:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:44.622 12:37:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:44.622 12:37:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:44.622 12:37:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:44.622 12:37:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:44.622 12:37:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:44.880 12:37:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:45.813 [2024-11-20 12:37:11.031289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:45.813 [2024-11-20 12:37:11.129864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.813 [2024-11-20 12:37:11.130115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.813 [2024-11-20 12:37:11.245931] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:45.813 [2024-11-20 12:37:11.246030] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:48.340 spdk_app_start Round 1 00:04:48.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:48.340 12:37:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:48.340 12:37:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:48.340 12:37:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58443 /var/tmp/spdk-nbd.sock 00:04:48.340 12:37:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58443 ']' 00:04:48.340 12:37:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:48.340 12:37:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.340 12:37:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
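The Round 0 trace above is the standard nbd round-trip check from nbd_common.sh: create malloc bdevs over RPC, export each one as a /dev/nbd device, push random data through the block device, and byte-compare it back against the source file. A minimal stand-alone sketch of that flow, reconstructed from the commands in the trace (socket path, bdev size, and the 1 MiB transfer mirror the log; /tmp/nbdrandtest stands in for the repo's temp file, so treat the exact paths as assumptions):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  tmp=/tmp/nbdrandtest   # stand-in for the repo's nbdrandtest path

  # Create a 64 MiB malloc bdev with a 4096-byte block size, then export it over nbd
  $rpc -s "$sock" bdev_malloc_create 64 4096          # prints the new bdev name, e.g. Malloc0
  $rpc -s "$sock" nbd_start_disk Malloc0 /dev/nbd0

  # Write 1 MiB of random data through the nbd device, then compare it back
  dd if=/dev/urandom of="$tmp" bs=4096 count=256
  dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M "$tmp" /dev/nbd0

  # Tear down the export and the temp file
  $rpc -s "$sock" nbd_stop_disk /dev/nbd0
  rm -f "$tmp"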
00:04:48.340 12:37:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.340 12:37:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:48.340 12:37:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.340 12:37:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:48.340 12:37:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:48.340 Malloc0 00:04:48.340 12:37:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:48.599 Malloc1 00:04:48.599 12:37:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:48.599 12:37:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.599 12:37:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.599 12:37:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:48.599 12:37:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.599 12:37:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:48.599 12:37:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:48.599 12:37:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.599 12:37:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.599 12:37:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:48.599 12:37:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.599 12:37:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:48.599 12:37:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:48.599 12:37:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:48.599 12:37:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.599 12:37:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:48.858 /dev/nbd0 00:04:48.858 12:37:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:48.858 12:37:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:48.858 12:37:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:48.858 12:37:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:48.858 12:37:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:48.858 12:37:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:48.858 12:37:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:48.858 12:37:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:48.858 12:37:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:48.858 12:37:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:48.858 12:37:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:48.858 1+0 records in 00:04:48.858 1+0 records out 
00:04:48.858 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000160226 s, 25.6 MB/s 00:04:48.858 12:37:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.858 12:37:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:48.858 12:37:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.858 12:37:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:48.858 12:37:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:48.858 12:37:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:48.858 12:37:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.858 12:37:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:49.117 /dev/nbd1 00:04:49.117 12:37:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:49.117 12:37:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:49.117 12:37:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:49.117 12:37:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:49.117 12:37:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:49.117 12:37:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:49.117 12:37:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:49.117 12:37:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:49.117 12:37:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:49.117 12:37:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:49.117 12:37:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:49.117 1+0 records in 00:04:49.117 1+0 records out 00:04:49.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302637 s, 13.5 MB/s 00:04:49.117 12:37:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:49.117 12:37:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:49.117 12:37:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:49.117 12:37:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:49.117 12:37:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:49.117 12:37:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:49.117 12:37:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.117 12:37:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.117 12:37:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.117 12:37:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:49.117 12:37:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:49.117 { 00:04:49.117 "nbd_device": "/dev/nbd0", 00:04:49.117 "bdev_name": "Malloc0" 00:04:49.117 }, 00:04:49.117 { 00:04:49.117 "nbd_device": "/dev/nbd1", 00:04:49.117 "bdev_name": "Malloc1" 00:04:49.117 } 
00:04:49.117 ]' 00:04:49.117 12:37:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:49.117 { 00:04:49.117 "nbd_device": "/dev/nbd0", 00:04:49.117 "bdev_name": "Malloc0" 00:04:49.117 }, 00:04:49.117 { 00:04:49.117 "nbd_device": "/dev/nbd1", 00:04:49.117 "bdev_name": "Malloc1" 00:04:49.117 } 00:04:49.117 ]' 00:04:49.117 12:37:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:49.375 /dev/nbd1' 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:49.375 /dev/nbd1' 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:49.375 256+0 records in 00:04:49.375 256+0 records out 00:04:49.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00336565 s, 312 MB/s 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:49.375 256+0 records in 00:04:49.375 256+0 records out 00:04:49.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213623 s, 49.1 MB/s 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:49.375 256+0 records in 00:04:49.375 256+0 records out 00:04:49.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184469 s, 56.8 MB/s 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:49.375 12:37:14 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.375 12:37:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:49.632 12:37:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:49.632 12:37:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:49.633 12:37:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:49.633 12:37:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.633 12:37:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.633 12:37:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:49.633 12:37:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:49.633 12:37:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.633 12:37:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.633 12:37:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:49.890 12:37:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:49.890 12:37:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:49.890 12:37:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:49.890 12:37:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.890 12:37:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.890 12:37:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:49.891 12:37:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:49.891 12:37:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.891 12:37:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.891 12:37:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.891 12:37:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:49.891 12:37:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:49.891 12:37:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:49.891 12:37:15 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:49.891 12:37:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:49.891 12:37:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.891 12:37:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:49.891 12:37:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:49.891 12:37:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:49.891 12:37:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:49.891 12:37:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:49.891 12:37:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:49.891 12:37:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:49.891 12:37:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:50.456 12:37:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:51.026 [2024-11-20 12:37:16.251980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:51.026 [2024-11-20 12:37:16.333907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.026 [2024-11-20 12:37:16.333994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.026 [2024-11-20 12:37:16.441431] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:51.027 [2024-11-20 12:37:16.441501] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:53.570 spdk_app_start Round 2 00:04:53.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:53.570 12:37:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:53.570 12:37:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:53.570 12:37:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58443 /var/tmp/spdk-nbd.sock 00:04:53.570 12:37:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58443 ']' 00:04:53.570 12:37:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:53.570 12:37:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.570 12:37:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
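The cleanup check that runs before each spdk_kill_instance above is nbd_common.sh's nbd_get_count: list the exported disks over RPC and count how many /dev/nbd entries remain; the round only ends cleanly once that count is 0. A sketch of the traced pipeline (the trailing || true reflects the bare "true" at nbd_common.sh@65 in the trace, needed because grep -c exits non-zero when it counts 0 matches):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  disks_json=$($rpc -s "$sock" nbd_get_disks)               # '[]' once every disk is stopped
  disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
  count=$(echo "$disks_name" | grep -c /dev/nbd || true)    # 0 when nothing is exported
  if [ "$count" -ne 0 ]; then
    echo "nbd devices still attached:" $disks_name
    exit 1
  fi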
00:04:53.570 12:37:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.570 12:37:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:53.570 12:37:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.570 12:37:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:53.570 12:37:18 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.839 Malloc0 00:04:53.839 12:37:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.101 Malloc1 00:04:54.101 12:37:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.101 12:37:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.101 12:37:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.101 12:37:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:54.101 12:37:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.101 12:37:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:54.101 12:37:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.101 12:37:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.101 12:37:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.101 12:37:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:54.101 12:37:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.101 12:37:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:54.101 12:37:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:54.101 12:37:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:54.101 12:37:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.101 12:37:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:54.101 /dev/nbd0 00:04:54.101 12:37:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:54.101 12:37:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:54.101 12:37:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:54.101 12:37:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:54.101 12:37:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:54.101 12:37:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:54.101 12:37:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:54.101 12:37:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:54.101 12:37:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:54.101 12:37:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:54.101 12:37:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.101 1+0 records in 00:04:54.101 1+0 records out 
00:04:54.101 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000568892 s, 7.2 MB/s 00:04:54.101 12:37:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.362 12:37:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:54.362 12:37:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.362 12:37:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:54.362 12:37:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:54.362 12:37:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.362 12:37:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.362 12:37:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:54.362 /dev/nbd1 00:04:54.362 12:37:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:54.362 12:37:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:54.362 12:37:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:54.362 12:37:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:54.362 12:37:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:54.362 12:37:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:54.362 12:37:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:54.362 12:37:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:54.362 12:37:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:54.362 12:37:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:54.362 12:37:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.362 1+0 records in 00:04:54.362 1+0 records out 00:04:54.362 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240046 s, 17.1 MB/s 00:04:54.362 12:37:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.362 12:37:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:54.362 12:37:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.362 12:37:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:54.362 12:37:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:54.362 12:37:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.362 12:37:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.362 12:37:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.362 12:37:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.362 12:37:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.621 12:37:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:54.621 { 00:04:54.621 "nbd_device": "/dev/nbd0", 00:04:54.621 "bdev_name": "Malloc0" 00:04:54.621 }, 00:04:54.621 { 00:04:54.621 "nbd_device": "/dev/nbd1", 00:04:54.621 "bdev_name": "Malloc1" 00:04:54.621 } 
00:04:54.621 ]' 00:04:54.621 12:37:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.621 12:37:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:54.621 { 00:04:54.621 "nbd_device": "/dev/nbd0", 00:04:54.621 "bdev_name": "Malloc0" 00:04:54.621 }, 00:04:54.621 { 00:04:54.621 "nbd_device": "/dev/nbd1", 00:04:54.621 "bdev_name": "Malloc1" 00:04:54.621 } 00:04:54.621 ]' 00:04:54.621 12:37:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:54.621 /dev/nbd1' 00:04:54.621 12:37:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:54.621 /dev/nbd1' 00:04:54.621 12:37:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.621 12:37:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:54.621 12:37:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:54.621 12:37:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:54.621 12:37:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:54.621 12:37:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:54.621 12:37:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.621 12:37:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.621 12:37:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:54.621 12:37:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.621 12:37:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:54.621 12:37:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:54.621 256+0 records in 00:04:54.621 256+0 records out 00:04:54.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00688456 s, 152 MB/s 00:04:54.621 12:37:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.621 12:37:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:54.880 256+0 records in 00:04:54.880 256+0 records out 00:04:54.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207791 s, 50.5 MB/s 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:54.880 256+0 records in 00:04:54.880 256+0 records out 00:04:54.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225748 s, 46.4 MB/s 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:54.880 12:37:20 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.880 12:37:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:55.138 12:37:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.138 12:37:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.138 12:37:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.138 12:37:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:55.138 12:37:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:55.138 12:37:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:55.138 12:37:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:55.138 12:37:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.138 12:37:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.138 12:37:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:55.138 12:37:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.138 12:37:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.138 12:37:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.138 12:37:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.138 12:37:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.397 12:37:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:55.397 12:37:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:55.397 12:37:20 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:55.397 12:37:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:55.397 12:37:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:55.397 12:37:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.397 12:37:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:55.397 12:37:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:55.397 12:37:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:55.397 12:37:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:55.397 12:37:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:55.397 12:37:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:55.397 12:37:20 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:55.969 12:37:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:56.538 [2024-11-20 12:37:21.918515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.538 [2024-11-20 12:37:22.021067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.538 [2024-11-20 12:37:22.021185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.797 [2024-11-20 12:37:22.148112] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:56.797 [2024-11-20 12:37:22.148203] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:58.703 12:37:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58443 /var/tmp/spdk-nbd.sock 00:04:58.703 12:37:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58443 ']' 00:04:58.703 12:37:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:58.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:58.703 12:37:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.703 12:37:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
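Each round ends the same way: spdk_kill_instance SIGTERM over RPC asks the app to shut itself down, and the killprocess helper traced just below confirms the pid actually exits. A hedged reconstruction of that helper from the checks visible in the trace (kill -0, uname, ps comm lookup, kill, wait); the real autotest_common.sh version also handles sudo-owned processes, which is elided here:

  killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" 2>/dev/null || return 0            # nothing to do if it is already gone
    if [ "$(uname)" = Linux ]; then
      process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0 in this log
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"    # works here because the test started the app as a child of this shell
  }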
00:04:58.703 12:37:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.703 12:37:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:58.961 12:37:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.961 12:37:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:58.961 12:37:24 event.app_repeat -- event/event.sh@39 -- # killprocess 58443 00:04:58.961 12:37:24 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58443 ']' 00:04:58.961 12:37:24 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58443 00:04:58.961 12:37:24 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:58.961 12:37:24 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.961 12:37:24 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58443 00:04:58.961 killing process with pid 58443 00:04:58.961 12:37:24 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.961 12:37:24 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.961 12:37:24 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58443' 00:04:58.961 12:37:24 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58443 00:04:58.961 12:37:24 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58443 00:04:59.898 spdk_app_start is called in Round 0. 00:04:59.898 Shutdown signal received, stop current app iteration 00:04:59.898 Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 reinitialization... 00:04:59.898 spdk_app_start is called in Round 1. 00:04:59.898 Shutdown signal received, stop current app iteration 00:04:59.898 Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 reinitialization... 00:04:59.898 spdk_app_start is called in Round 2. 00:04:59.898 Shutdown signal received, stop current app iteration 00:04:59.898 Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 reinitialization... 00:04:59.898 spdk_app_start is called in Round 3. 00:04:59.898 Shutdown signal received, stop current app iteration 00:04:59.898 12:37:25 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:59.898 12:37:25 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:59.898 00:04:59.898 real 0m18.046s 00:04:59.898 user 0m39.426s 00:04:59.898 sys 0m2.102s 00:04:59.898 12:37:25 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.898 ************************************ 00:04:59.898 END TEST app_repeat 00:04:59.898 12:37:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.898 ************************************ 00:04:59.898 12:37:25 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:59.898 12:37:25 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:59.898 12:37:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.898 12:37:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.898 12:37:25 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.898 ************************************ 00:04:59.898 START TEST cpu_locks 00:04:59.898 ************************************ 00:04:59.898 12:37:25 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:59.898 * Looking for test storage... 
00:04:59.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:59.898 12:37:25 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:59.898 12:37:25 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:59.898 12:37:25 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:59.898 12:37:25 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.898 12:37:25 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.899 12:37:25 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:59.899 12:37:25 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.899 12:37:25 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:59.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.899 --rc genhtml_branch_coverage=1 00:04:59.899 --rc genhtml_function_coverage=1 00:04:59.899 --rc genhtml_legend=1 00:04:59.899 --rc geninfo_all_blocks=1 00:04:59.899 --rc geninfo_unexecuted_blocks=1 00:04:59.899 00:04:59.899 ' 00:04:59.899 12:37:25 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:59.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.899 --rc genhtml_branch_coverage=1 00:04:59.899 --rc genhtml_function_coverage=1 
00:04:59.899 --rc genhtml_legend=1 00:04:59.899 --rc geninfo_all_blocks=1 00:04:59.899 --rc geninfo_unexecuted_blocks=1 00:04:59.899 00:04:59.899 ' 00:04:59.899 12:37:25 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:59.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.899 --rc genhtml_branch_coverage=1 00:04:59.899 --rc genhtml_function_coverage=1 00:04:59.899 --rc genhtml_legend=1 00:04:59.899 --rc geninfo_all_blocks=1 00:04:59.899 --rc geninfo_unexecuted_blocks=1 00:04:59.899 00:04:59.899 ' 00:04:59.899 12:37:25 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:59.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.899 --rc genhtml_branch_coverage=1 00:04:59.899 --rc genhtml_function_coverage=1 00:04:59.899 --rc genhtml_legend=1 00:04:59.899 --rc geninfo_all_blocks=1 00:04:59.899 --rc geninfo_unexecuted_blocks=1 00:04:59.899 00:04:59.899 ' 00:04:59.899 12:37:25 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:59.899 12:37:25 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:59.899 12:37:25 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:59.899 12:37:25 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:59.899 12:37:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.899 12:37:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.899 12:37:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.899 ************************************ 00:04:59.899 START TEST default_locks 00:04:59.899 ************************************ 00:04:59.899 12:37:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:59.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.899 12:37:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58869 00:04:59.899 12:37:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58869 00:04:59.899 12:37:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58869 ']' 00:04:59.899 12:37:25 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.899 12:37:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.899 12:37:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.899 12:37:25 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.899 12:37:25 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.899 12:37:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.899 [2024-11-20 12:37:25.408492] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
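
The xtrace above is scripts/common.sh probing the installed lcov version before exporting the branch/function coverage flags: it splits both version strings on ".", "-", and ":" and compares them field by field. A minimal standalone sketch of that compare, with a hypothetical helper name (the real logic is cmp_versions/lt in scripts/common.sh) and numeric fields assumed:

    # Sketch: component-wise "less than" over dotted versions, as traced above.
    # ver_lt is a hypothetical stand-in for scripts/common.sh's cmp_versions.
    ver_lt() {
        local IFS=.-: i a b n
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                        # equal is not "less than"
    }
    ver_lt 1.15 2 && echo "pre-2.0 lcov: pass the --rc lcov_*_coverage flags"

Here 1.15 < 2 because the first fields already differ (1 < 2), which is why the run above settles on the --rc options.
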
00:04:59.899 [2024-11-20 12:37:25.408606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58869 ] 00:05:00.159 [2024-11-20 12:37:25.564360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.419 [2024-11-20 12:37:25.692874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.992 12:37:26 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.992 12:37:26 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:00.992 12:37:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58869 00:05:00.992 12:37:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58869 00:05:00.992 12:37:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:01.250 12:37:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58869 00:05:01.250 12:37:26 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58869 ']' 00:05:01.250 12:37:26 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58869 00:05:01.250 12:37:26 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:01.250 12:37:26 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.250 12:37:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58869 00:05:01.250 killing process with pid 58869 00:05:01.250 12:37:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.250 12:37:26 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.250 12:37:26 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58869' 00:05:01.250 12:37:26 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58869 00:05:01.250 12:37:26 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58869 00:05:02.626 12:37:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58869 00:05:02.626 12:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:02.626 12:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58869 00:05:02.626 12:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:02.626 12:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.626 12:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:02.626 12:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.626 12:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58869 00:05:02.626 12:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58869 ']' 00:05:02.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
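
The locks_exist check above is lslocks -p <pid> filtered for the spdk_cpu_lock prefix: a target started with -m 0x1 must hold a lock on /var/tmp/spdk_cpu_lock_000 while it runs. A minimal way to watch the same thing by hand, with flock(1) standing in for the target's claim (the real claim path is claim_cpu_cores() in app.c, per the errors later in this log):

    # Sketch: hold a lock on the core-0 lock file and observe it via lslocks.
    flock /var/tmp/spdk_cpu_lock_000 sleep 30 &   # stand-in for spdk_tgt -m 0x1
    holder=$!
    sleep 0.2                                     # give flock time to claim
    lslocks -p "$holder" | grep spdk_cpu_lock     # visible while the holder lives
    kill "$holder"                                # afterwards the grep finds nothing
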
00:05:02.626 12:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.626 12:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.626 12:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.626 ERROR: process (pid: 58869) is no longer running 00:05:02.626 12:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.626 12:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.626 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58869) - No such process 00:05:02.626 12:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.626 12:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:02.626 12:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:02.626 12:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:02.626 12:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:02.626 12:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:02.626 12:37:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:02.626 12:37:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:02.626 12:37:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:02.626 12:37:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:02.626 ************************************ 00:05:02.626 END TEST default_locks 00:05:02.626 ************************************ 00:05:02.626 00:05:02.626 real 0m2.753s 00:05:02.626 user 0m2.694s 00:05:02.626 sys 0m0.505s 00:05:02.626 12:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.626 12:37:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.887 12:37:28 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:02.887 12:37:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.887 12:37:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.887 12:37:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.887 ************************************ 00:05:02.887 START TEST default_locks_via_rpc 00:05:02.887 ************************************ 00:05:02.887 12:37:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:02.887 12:37:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58933 00:05:02.887 12:37:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58933 00:05:02.887 12:37:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58933 ']' 00:05:02.887 12:37:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
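
default_locks ends by asserting the failure path: once pid 58869 is gone, waitforlisten must fail, and the NOT wrapper inverts that exit status so the test passes. The inversion itself is one line of shell; a sketch with a hypothetical name (autotest_common.sh's NOT also records the error status, as the es=1 trace above shows):

    # Sketch: succeed only when the wrapped command fails.
    not_() { ! "$@"; }
    not_ kill -0 58869 && echo "pid 58869 is gone, as the test expects"
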
00:05:02.887 12:37:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.887 12:37:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.887 12:37:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.888 12:37:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.888 12:37:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.888 [2024-11-20 12:37:28.242483] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:05:02.888 [2024-11-20 12:37:28.242707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58933 ] 00:05:03.147 [2024-11-20 12:37:28.406505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.147 [2024-11-20 12:37:28.509622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.718 12:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.719 12:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:03.719 12:37:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:03.719 12:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.719 12:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.719 12:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.719 12:37:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:03.719 12:37:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:03.719 12:37:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:03.719 12:37:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:03.719 12:37:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:03.719 12:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.719 12:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.719 12:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.719 12:37:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58933 00:05:03.719 12:37:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:03.719 12:37:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58933 00:05:03.979 12:37:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58933 00:05:03.979 12:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58933 ']' 00:05:03.979 12:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58933 00:05:03.979 12:37:29 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:03.979 12:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.979 12:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58933 00:05:03.979 12:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.979 12:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.979 killing process with pid 58933 00:05:03.979 12:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58933' 00:05:03.979 12:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58933 00:05:03.979 12:37:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58933 00:05:05.896 00:05:05.896 real 0m2.762s 00:05:05.896 user 0m2.727s 00:05:05.896 sys 0m0.480s 00:05:05.896 12:37:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.896 ************************************ 00:05:05.896 END TEST default_locks_via_rpc 00:05:05.896 ************************************ 00:05:05.896 12:37:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.896 12:37:30 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:05.896 12:37:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.896 12:37:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.896 12:37:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.896 ************************************ 00:05:05.896 START TEST non_locking_app_on_locked_coremask 00:05:05.896 ************************************ 00:05:05.896 12:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:05.896 12:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58996 00:05:05.896 12:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58996 /var/tmp/spdk.sock 00:05:05.896 12:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58996 ']' 00:05:05.896 12:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.896 12:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.896 12:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
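
default_locks_via_rpc, finished above, toggles the same core locks at runtime rather than at startup: framework_disable_cpumask_locks releases every /var/tmp/spdk_cpu_lock_* file (so the no_locks glob finds nothing), and framework_enable_cpumask_locks re-claims them. Issued by hand it would look like this, assuming the SPDK repo root as working directory and the default /var/tmp/spdk.sock socket:

    # Sketch: the two RPCs exercised above, against a running spdk_tgt.
    scripts/rpc.py framework_disable_cpumask_locks   # no_locks now sees zero lock files
    scripts/rpc.py framework_enable_cpumask_locks    # lock files are back for mask 0x1
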
00:05:05.896 12:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.896 12:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.896 12:37:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.896 [2024-11-20 12:37:31.069888] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:05:05.896 [2024-11-20 12:37:31.070011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58996 ] 00:05:05.896 [2024-11-20 12:37:31.234383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.896 [2024-11-20 12:37:31.369215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.837 12:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.837 12:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:06.837 12:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59012 00:05:06.837 12:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59012 /var/tmp/spdk2.sock 00:05:06.837 12:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59012 ']' 00:05:06.837 12:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:06.837 12:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:06.837 12:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:06.838 12:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:06.838 12:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.838 12:37:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.838 [2024-11-20 12:37:32.173265] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:05:06.838 [2024-11-20 12:37:32.173425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59012 ] 00:05:07.098 [2024-11-20 12:37:32.353707] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
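
non_locking_app_on_locked_coremask runs two targets on the same core mask: the first claims core 0 normally, while the second passes --disable-cpumask-locks (hence the "CPU core locks deactivated" notice above) plus a separate RPC socket so the two don't collide. The launch pair, reduced to its flags (paths as in the log, repo root assumed):

    # Sketch: a locked and an unlocked target coexisting on mask 0x1.
    build/bin/spdk_tgt -m 0x1 &                                        # claims spdk_cpu_lock_000
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    wait
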
00:05:07.098 [2024-11-20 12:37:32.353786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.098 [2024-11-20 12:37:32.604159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.641 12:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.641 12:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:09.641 12:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58996 00:05:09.641 12:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58996 00:05:09.641 12:37:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.641 12:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58996 00:05:09.641 12:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58996 ']' 00:05:09.641 12:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58996 00:05:09.641 12:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:09.641 12:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.641 12:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58996 00:05:09.641 12:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.641 12:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.641 killing process with pid 58996 00:05:09.641 12:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58996' 00:05:09.641 12:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58996 00:05:09.641 12:37:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58996 00:05:12.922 12:37:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59012 00:05:12.922 12:37:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59012 ']' 00:05:12.922 12:37:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59012 00:05:12.922 12:37:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:12.922 12:37:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.922 12:37:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59012 00:05:12.922 12:37:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.922 killing process with pid 59012 00:05:12.922 12:37:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.922 12:37:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59012' 00:05:12.922 12:37:38 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59012 00:05:12.922 12:37:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59012 00:05:14.298 00:05:14.298 real 0m8.766s 00:05:14.298 user 0m9.010s 00:05:14.298 sys 0m1.129s 00:05:14.298 12:37:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.298 ************************************ 00:05:14.298 END TEST non_locking_app_on_locked_coremask 00:05:14.298 ************************************ 00:05:14.298 12:37:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.299 12:37:39 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:14.299 12:37:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.299 12:37:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.299 12:37:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.299 ************************************ 00:05:14.299 START TEST locking_app_on_unlocked_coremask 00:05:14.299 ************************************ 00:05:14.299 12:37:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:14.299 12:37:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59133 00:05:14.299 12:37:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59133 /var/tmp/spdk.sock 00:05:14.299 12:37:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59133 ']' 00:05:14.299 12:37:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.299 12:37:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.299 12:37:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.299 12:37:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:14.299 12:37:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.299 12:37:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.556 [2024-11-20 12:37:39.891642] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:05:14.556 [2024-11-20 12:37:39.891789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59133 ] 00:05:14.556 [2024-11-20 12:37:40.052035] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:14.556 [2024-11-20 12:37:40.052081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.815 [2024-11-20 12:37:40.152961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.381 12:37:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.381 12:37:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:15.381 12:37:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59143 00:05:15.381 12:37:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59143 /var/tmp/spdk2.sock 00:05:15.381 12:37:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59143 ']' 00:05:15.381 12:37:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.381 12:37:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.381 12:37:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.381 12:37:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.381 12:37:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.381 12:37:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:15.381 [2024-11-20 12:37:40.822708] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
00:05:15.381 [2024-11-20 12:37:40.822834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59143 ] 00:05:15.639 [2024-11-20 12:37:40.996996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.897 [2024-11-20 12:37:41.203016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.271 12:37:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.271 12:37:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:17.271 12:37:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59143 00:05:17.271 12:37:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59143 00:05:17.271 12:37:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.271 12:37:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59133 00:05:17.271 12:37:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59133 ']' 00:05:17.271 12:37:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59133 00:05:17.271 12:37:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:17.271 12:37:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.271 12:37:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59133 00:05:17.271 12:37:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.271 killing process with pid 59133 00:05:17.271 12:37:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.271 12:37:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59133' 00:05:17.271 12:37:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59133 00:05:17.271 12:37:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59133 00:05:20.556 12:37:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59143 00:05:20.556 12:37:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59143 ']' 00:05:20.556 12:37:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59143 00:05:20.556 12:37:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:20.556 12:37:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.556 12:37:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59143 00:05:20.556 12:37:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.556 killing process with pid 59143 00:05:20.556 12:37:45 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.556 12:37:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59143' 00:05:20.556 12:37:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59143 00:05:20.556 12:37:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59143 00:05:21.931 00:05:21.931 real 0m7.424s 00:05:21.931 user 0m7.645s 00:05:21.931 sys 0m0.862s 00:05:21.931 12:37:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.931 12:37:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.931 ************************************ 00:05:21.931 END TEST locking_app_on_unlocked_coremask 00:05:21.931 ************************************ 00:05:21.931 12:37:47 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:21.931 12:37:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.931 12:37:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.931 12:37:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.931 ************************************ 00:05:21.931 START TEST locking_app_on_locked_coremask 00:05:21.931 ************************************ 00:05:21.931 12:37:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:21.931 12:37:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59251 00:05:21.931 12:37:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59251 /var/tmp/spdk.sock 00:05:21.931 12:37:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59251 ']' 00:05:21.931 12:37:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.931 12:37:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.931 12:37:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.931 12:37:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.931 12:37:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.931 12:37:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.931 [2024-11-20 12:37:47.373548] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
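
locking_app_on_unlocked_coremask, which just finished, is the mirror image: when the first target opts out of locking, core 0 stays unclaimed, so a second target that does lock can still start on the same mask. In sketch form, same assumptions as before:

    # Sketch: unlocked first, locking second -- both start successfully.
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &        # leaves core 0 free
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &         # locks core 0 itself
    wait
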
00:05:21.931 [2024-11-20 12:37:47.373680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59251 ] 00:05:22.189 [2024-11-20 12:37:47.538092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.189 [2024-11-20 12:37:47.638036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.755 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.755 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:22.755 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59267 00:05:22.755 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59267 /var/tmp/spdk2.sock 00:05:22.755 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:22.755 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59267 /var/tmp/spdk2.sock 00:05:22.755 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:22.755 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.755 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:22.755 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:22.755 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.755 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59267 /var/tmp/spdk2.sock 00:05:22.755 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59267 ']' 00:05:22.755 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:22.755 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:22.755 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:22.755 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.755 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.013 [2024-11-20 12:37:48.309441] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
00:05:23.013 [2024-11-20 12:37:48.309560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59267 ] 00:05:23.013 [2024-11-20 12:37:48.483947] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59251 has claimed it. 00:05:23.013 [2024-11-20 12:37:48.484006] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:23.578 ERROR: process (pid: 59267) is no longer running 00:05:23.578 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59267) - No such process 00:05:23.578 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.578 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:23.578 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:23.578 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:23.578 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:23.578 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:23.578 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59251 00:05:23.578 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59251 00:05:23.578 12:37:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.842 12:37:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59251 00:05:23.842 12:37:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59251 ']' 00:05:23.842 12:37:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59251 00:05:23.842 12:37:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:23.842 12:37:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.843 12:37:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59251 00:05:23.843 12:37:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.843 12:37:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.843 killing process with pid 59251 00:05:23.843 12:37:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59251' 00:05:23.843 12:37:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59251 00:05:23.843 12:37:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59251 00:05:25.216 00:05:25.216 real 0m3.328s 00:05:25.216 user 0m3.508s 00:05:25.216 sys 0m0.555s 00:05:25.216 12:37:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.216 ************************************ 00:05:25.216 END 
TEST locking_app_on_locked_coremask 00:05:25.216 ************************************ 00:05:25.216 12:37:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.216 12:37:50 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:25.216 12:37:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.216 12:37:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.216 12:37:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.216 ************************************ 00:05:25.216 START TEST locking_overlapped_coremask 00:05:25.216 ************************************ 00:05:25.216 12:37:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:25.216 12:37:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59325 00:05:25.216 12:37:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59325 /var/tmp/spdk.sock 00:05:25.216 12:37:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59325 ']' 00:05:25.216 12:37:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.216 12:37:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.216 12:37:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.216 12:37:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.216 12:37:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.216 12:37:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:25.474 [2024-11-20 12:37:50.748659] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
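
That failure is the point of locking_app_on_locked_coremask: with pid 59251 holding core 0, the second instance hits claim_cpu_cores(), logs "Cannot create lock on core 0, probably process 59251 has claimed it", and exits, so NOT waitforlisten 59267 passes. The contention is reproducible with flock(1) as a stand-in for that claim:

    # Sketch: a second non-blocking claim on an already-held core lock fails.
    flock /var/tmp/spdk_cpu_lock_000 sleep 5 &        # first target holds core 0
    sleep 0.2
    flock -n /var/tmp/spdk_cpu_lock_000 true \
        || echo "core 0 already claimed -> second target exits"
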
00:05:25.475 [2024-11-20 12:37:50.748809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59325 ] 00:05:25.475 [2024-11-20 12:37:50.911901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:25.733 [2024-11-20 12:37:51.017635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.733 [2024-11-20 12:37:51.018094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.733 [2024-11-20 12:37:51.018293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.297 12:37:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.297 12:37:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:26.297 12:37:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59343 00:05:26.297 12:37:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59343 /var/tmp/spdk2.sock 00:05:26.297 12:37:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:26.297 12:37:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59343 /var/tmp/spdk2.sock 00:05:26.297 12:37:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:26.297 12:37:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:26.297 12:37:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.297 12:37:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:26.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.297 12:37:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.297 12:37:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59343 /var/tmp/spdk2.sock 00:05:26.297 12:37:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59343 ']' 00:05:26.297 12:37:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.297 12:37:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.297 12:37:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.297 12:37:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.297 12:37:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.297 [2024-11-20 12:37:51.699366] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
00:05:26.297 [2024-11-20 12:37:51.699492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59343 ] 00:05:26.554 [2024-11-20 12:37:51.873321] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59325 has claimed it. 00:05:26.554 [2024-11-20 12:37:51.873388] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:27.118 ERROR: process (pid: 59343) is no longer running 00:05:27.118 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59343) - No such process 00:05:27.118 12:37:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.118 12:37:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:27.118 12:37:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:27.118 12:37:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:27.118 12:37:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:27.118 12:37:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:27.118 12:37:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:27.118 12:37:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:27.118 12:37:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:27.118 12:37:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:27.118 12:37:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59325 00:05:27.118 12:37:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59325 ']' 00:05:27.118 12:37:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59325 00:05:27.118 12:37:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:27.118 12:37:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.118 12:37:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59325 00:05:27.118 killing process with pid 59325 00:05:27.118 12:37:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.118 12:37:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.118 12:37:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59325' 00:05:27.118 12:37:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59325 00:05:27.118 12:37:52 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59325 00:05:28.525 00:05:28.525 real 0m3.221s 00:05:28.525 user 0m8.683s 00:05:28.525 sys 0m0.460s 00:05:28.525 12:37:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.525 ************************************ 00:05:28.525 12:37:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.525 END TEST locking_overlapped_coremask 00:05:28.525 ************************************ 00:05:28.525 12:37:53 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:28.525 12:37:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.525 12:37:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.525 12:37:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.525 ************************************ 00:05:28.526 START TEST locking_overlapped_coremask_via_rpc 00:05:28.526 ************************************ 00:05:28.526 12:37:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:28.526 12:37:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59396 00:05:28.526 12:37:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59396 /var/tmp/spdk.sock 00:05:28.526 12:37:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59396 ']' 00:05:28.526 12:37:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.526 12:37:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.526 12:37:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.526 12:37:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.526 12:37:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.526 12:37:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:28.526 [2024-11-20 12:37:54.007452] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:05:28.526 [2024-11-20 12:37:54.007570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59396 ] 00:05:28.784 [2024-11-20 12:37:54.167931] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
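
locking_overlapped_coremask, closed out above, pits mask 0x7 (cores 0-2) against 0x1c (cores 2-4); the contested core is simply the bitwise AND of the two masks, which is why the claim error names core 2. check_remaining_locks then confirms that exactly the first target's three lock files survive:

    # Sketch: the overlap arithmetic and the expected lock files for mask 0x7.
    printf 'overlap of 0x7 and 0x1c: 0x%x (core 2)\n' $(( 0x7 & 0x1c ))
    ls /var/tmp/spdk_cpu_lock_{000..002}   # what the glob check above expects
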
00:05:28.784 [2024-11-20 12:37:54.167993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.784 [2024-11-20 12:37:54.272033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.784 [2024-11-20 12:37:54.272212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.784 [2024-11-20 12:37:54.272392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.716 12:37:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.716 12:37:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:29.716 12:37:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59414 00:05:29.716 12:37:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59414 /var/tmp/spdk2.sock 00:05:29.716 12:37:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59414 ']' 00:05:29.716 12:37:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:29.716 12:37:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.716 12:37:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.716 12:37:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.716 12:37:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.716 12:37:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.716 [2024-11-20 12:37:54.963321] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:05:29.716 [2024-11-20 12:37:54.963441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59414 ] 00:05:29.716 [2024-11-20 12:37:55.144067] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:29.716 [2024-11-20 12:37:55.144139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:29.975 [2024-11-20 12:37:55.387667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.975 [2024-11-20 12:37:55.390832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.975 [2024-11-20 12:37:55.390852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.349 [2024-11-20 12:37:56.685936] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59396 has claimed it. 00:05:31.349 request: 00:05:31.349 { 00:05:31.349 "method": "framework_enable_cpumask_locks", 00:05:31.349 "req_id": 1 00:05:31.349 } 00:05:31.349 Got JSON-RPC error response 00:05:31.349 response: 00:05:31.349 { 00:05:31.349 "code": -32603, 00:05:31.349 "message": "Failed to claim CPU core: 2" 00:05:31.349 } 00:05:31.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
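The request/response pair above is the expected failure: pid 59396 already holds core 2, so the second target's framework_enable_cpumask_locks call comes back with -32603. The test asserts this through the NOT helper whose xtrace surrounds the rpc_cmd call; its essence is just an inverted exit status:

    # "NOT cmd" succeeds exactly when cmd fails, the negative-test idiom
    # from autotest_common.sh (the real helper also validates the argument
    # via valid_exec_arg first, as traced above).
    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded
        fi
        return 0        # it failed, which is what the test expects
    }
    NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks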
00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59396 /var/tmp/spdk.sock 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59396 ']' 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.349 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.607 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.607 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:31.607 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59414 /var/tmp/spdk2.sock 00:05:31.607 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59414 ']' 00:05:31.607 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.607 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.607 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
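waitforlisten, traced above with max_retries=100, just polls until the target answers on its UNIX socket or the process dies. A minimal stand-in, assuming scripts/rpc.py as the probe; the real helper in autotest_common.sh covers more corner cases:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        local max_retries=100
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1      # target died
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
                >/dev/null 2>&1 && return 0             # socket answers
            sleep 0.1
        done
        return 1                                        # retries exhausted
    }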
00:05:31.607 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.607 12:37:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.607 ************************************ 00:05:31.607 END TEST locking_overlapped_coremask_via_rpc 00:05:31.607 ************************************ 00:05:31.607 12:37:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.607 12:37:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:31.607 12:37:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:31.607 12:37:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:31.607 12:37:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:31.607 12:37:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:31.607 00:05:31.607 real 0m3.181s 00:05:31.607 user 0m1.104s 00:05:31.607 sys 0m0.155s 00:05:31.607 12:37:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.607 12:37:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.865 12:37:57 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:31.865 12:37:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59396 ]] 00:05:31.865 12:37:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59396 00:05:31.865 12:37:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59396 ']' 00:05:31.865 12:37:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59396 00:05:31.865 12:37:57 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:31.865 12:37:57 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.865 12:37:57 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59396 00:05:31.865 killing process with pid 59396 00:05:31.865 12:37:57 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.865 12:37:57 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.865 12:37:57 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59396' 00:05:31.865 12:37:57 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59396 00:05:31.865 12:37:57 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59396 00:05:33.239 12:37:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59414 ]] 00:05:33.239 12:37:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59414 00:05:33.239 12:37:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59414 ']' 00:05:33.239 12:37:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59414 00:05:33.239 12:37:58 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:33.239 12:37:58 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.239 
12:37:58 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59414 00:05:33.239 killing process with pid 59414 00:05:33.239 12:37:58 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:33.239 12:37:58 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:33.239 12:37:58 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59414' 00:05:33.239 12:37:58 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59414 00:05:33.239 12:37:58 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59414 00:05:34.612 12:38:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:34.612 Process with pid 59396 is not found 00:05:34.612 12:38:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:34.612 12:38:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59396 ]] 00:05:34.612 12:38:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59396 00:05:34.612 12:38:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59396 ']' 00:05:34.612 12:38:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59396 00:05:34.612 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59396) - No such process 00:05:34.612 12:38:00 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59396 is not found' 00:05:34.612 12:38:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59414 ]] 00:05:34.612 12:38:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59414 00:05:34.612 12:38:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59414 ']' 00:05:34.612 Process with pid 59414 is not found 00:05:34.612 12:38:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59414 00:05:34.612 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59414) - No such process 00:05:34.612 12:38:00 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59414 is not found' 00:05:34.612 12:38:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:34.612 ************************************ 00:05:34.612 END TEST cpu_locks 00:05:34.612 ************************************ 00:05:34.612 00:05:34.612 real 0m34.908s 00:05:34.612 user 0m58.630s 00:05:34.612 sys 0m5.048s 00:05:34.612 12:38:00 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.612 12:38:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.870 ************************************ 00:05:34.870 END TEST event 00:05:34.870 ************************************ 00:05:34.870 00:05:34.870 real 1m1.791s 00:05:34.870 user 1m51.593s 00:05:34.870 sys 0m7.942s 00:05:34.870 12:38:00 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.870 12:38:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.870 12:38:00 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:34.870 12:38:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.870 12:38:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.870 12:38:00 -- common/autotest_common.sh@10 -- # set +x 00:05:34.870 ************************************ 00:05:34.870 START TEST thread 00:05:34.870 ************************************ 00:05:34.870 12:38:00 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:34.870 * Looking for test storage... 
00:05:34.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:34.870 12:38:00 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.870 12:38:00 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.870 12:38:00 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.870 12:38:00 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.870 12:38:00 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.870 12:38:00 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.870 12:38:00 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.870 12:38:00 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.870 12:38:00 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.870 12:38:00 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.870 12:38:00 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.870 12:38:00 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.870 12:38:00 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.870 12:38:00 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.870 12:38:00 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.870 12:38:00 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:34.870 12:38:00 thread -- scripts/common.sh@345 -- # : 1 00:05:34.870 12:38:00 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.870 12:38:00 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.870 12:38:00 thread -- scripts/common.sh@365 -- # decimal 1 00:05:34.870 12:38:00 thread -- scripts/common.sh@353 -- # local d=1 00:05:34.870 12:38:00 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.870 12:38:00 thread -- scripts/common.sh@355 -- # echo 1 00:05:34.870 12:38:00 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.870 12:38:00 thread -- scripts/common.sh@366 -- # decimal 2 00:05:34.870 12:38:00 thread -- scripts/common.sh@353 -- # local d=2 00:05:34.870 12:38:00 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.870 12:38:00 thread -- scripts/common.sh@355 -- # echo 2 00:05:34.870 12:38:00 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.870 12:38:00 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.871 12:38:00 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.871 12:38:00 thread -- scripts/common.sh@368 -- # return 0 00:05:34.871 12:38:00 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.871 12:38:00 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.871 --rc genhtml_branch_coverage=1 00:05:34.871 --rc genhtml_function_coverage=1 00:05:34.871 --rc genhtml_legend=1 00:05:34.871 --rc geninfo_all_blocks=1 00:05:34.871 --rc geninfo_unexecuted_blocks=1 00:05:34.871 00:05:34.871 ' 00:05:34.871 12:38:00 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.871 --rc genhtml_branch_coverage=1 00:05:34.871 --rc genhtml_function_coverage=1 00:05:34.871 --rc genhtml_legend=1 00:05:34.871 --rc geninfo_all_blocks=1 00:05:34.871 --rc geninfo_unexecuted_blocks=1 00:05:34.871 00:05:34.871 ' 00:05:34.871 12:38:00 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:34.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:34.871 --rc genhtml_branch_coverage=1 00:05:34.871 --rc genhtml_function_coverage=1 00:05:34.871 --rc genhtml_legend=1 00:05:34.871 --rc geninfo_all_blocks=1 00:05:34.871 --rc geninfo_unexecuted_blocks=1 00:05:34.871 00:05:34.871 ' 00:05:34.871 12:38:00 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.871 --rc genhtml_branch_coverage=1 00:05:34.871 --rc genhtml_function_coverage=1 00:05:34.871 --rc genhtml_legend=1 00:05:34.871 --rc geninfo_all_blocks=1 00:05:34.871 --rc geninfo_unexecuted_blocks=1 00:05:34.871 00:05:34.871 ' 00:05:34.871 12:38:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:34.871 12:38:00 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:34.871 12:38:00 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.871 12:38:00 thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.871 ************************************ 00:05:34.871 START TEST thread_poller_perf 00:05:34.871 ************************************ 00:05:34.871 12:38:00 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:34.871 [2024-11-20 12:38:00.372180] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:05:34.871 [2024-11-20 12:38:00.372309] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59580 ] 00:05:35.129 [2024-11-20 12:38:00.534499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.129 Running 1000 pollers for 1 seconds with 1 microseconds period. 
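The lt 1.15 2 / cmp_versions trace just above is autotest_common.sh probing the installed lcov before each test binary runs. Condensed into a self-contained form (a restatement of the scripts/common.sh logic; the real helper also splits on '-' and ':' via IFS=.-:):

    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly smaller
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # strictly larger
        done
        return 1    # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov is pre-2.0: use the 1.x option set"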
00:05:35.129 [2024-11-20 12:38:00.632057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.527 [2024-11-20T12:38:02.046Z] ====================================== 00:05:36.527 [2024-11-20T12:38:02.046Z] busy:2612463674 (cyc) 00:05:36.527 [2024-11-20T12:38:02.046Z] total_run_count: 306000 00:05:36.527 [2024-11-20T12:38:02.046Z] tsc_hz: 2600000000 (cyc) 00:05:36.527 [2024-11-20T12:38:02.046Z] ====================================== 00:05:36.527 [2024-11-20T12:38:02.046Z] poller_cost: 8537 (cyc), 3283 (nsec) 00:05:36.527 00:05:36.527 real 0m1.459s 00:05:36.527 user 0m1.277s 00:05:36.527 sys 0m0.074s 00:05:36.527 12:38:01 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.527 12:38:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.527 ************************************ 00:05:36.527 END TEST thread_poller_perf 00:05:36.527 ************************************ 00:05:36.527 12:38:01 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:36.527 12:38:01 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:36.528 12:38:01 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.528 12:38:01 thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.528 ************************************ 00:05:36.528 START TEST thread_poller_perf 00:05:36.528 ************************************ 00:05:36.528 12:38:01 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:36.528 [2024-11-20 12:38:01.878715] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:05:36.528 [2024-11-20 12:38:01.878854] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59615 ] 00:05:36.528 [2024-11-20 12:38:02.041651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.786 Running 1000 pollers for 1 seconds with 0 microseconds period. 
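The summary block above is simple arithmetic over the counters it prints: per-invocation cost is busy TSC cycles divided by total_run_count, then converted to nanoseconds through tsc_hz. Reproducing the timed-poller run (-b 1000 pollers, -l 1 microsecond period, -t 1 second) in bash, as a restatement of the printed result rather than the benchmark's C source:

    busy=2612463674          # busy TSC cycles over the 1 s run
    total_run_count=306000   # poller callbacks actually executed
    tsc_hz=2600000000        # 2.6 GHz timestamp counter
    cyc=$(( busy / total_run_count ))          # 8537 cyc per call
    nsec=$(( cyc * 1000000000 / tsc_hz ))      # 3283 nsec per call
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The second run below (-l 0, busy pollers) lands at 662 cyc / 254 nsec by the same formula: with no timer bookkeeping per period, each poll is far cheaper.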
00:05:36.786 [2024-11-20 12:38:02.141225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.165 [2024-11-20T12:38:03.684Z] ====================================== 00:05:38.165 [2024-11-20T12:38:03.684Z] busy:2603127734 (cyc) 00:05:38.165 [2024-11-20T12:38:03.684Z] total_run_count: 3932000 00:05:38.165 [2024-11-20T12:38:03.684Z] tsc_hz: 2600000000 (cyc) 00:05:38.165 [2024-11-20T12:38:03.684Z] ====================================== 00:05:38.165 [2024-11-20T12:38:03.684Z] poller_cost: 662 (cyc), 254 (nsec) 00:05:38.165 00:05:38.165 real 0m1.446s 00:05:38.165 user 0m1.268s 00:05:38.165 sys 0m0.071s 00:05:38.165 12:38:03 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.165 ************************************ 00:05:38.165 END TEST thread_poller_perf 00:05:38.165 ************************************ 00:05:38.165 12:38:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:38.165 12:38:03 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:38.165 00:05:38.165 real 0m3.156s 00:05:38.165 user 0m2.666s 00:05:38.165 sys 0m0.260s 00:05:38.165 12:38:03 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.165 12:38:03 thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.165 ************************************ 00:05:38.165 END TEST thread 00:05:38.165 ************************************ 00:05:38.165 12:38:03 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:38.165 12:38:03 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:38.165 12:38:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.165 12:38:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.165 12:38:03 -- common/autotest_common.sh@10 -- # set +x 00:05:38.165 ************************************ 00:05:38.165 START TEST app_cmdline 00:05:38.165 ************************************ 00:05:38.165 12:38:03 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:38.165 * Looking for test storage... 
00:05:38.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:38.165 12:38:03 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:38.165 12:38:03 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:38.165 12:38:03 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:38.165 12:38:03 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.165 12:38:03 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:38.165 12:38:03 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.165 12:38:03 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:38.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.165 --rc genhtml_branch_coverage=1 00:05:38.165 --rc genhtml_function_coverage=1 00:05:38.165 --rc genhtml_legend=1 00:05:38.165 --rc geninfo_all_blocks=1 00:05:38.165 --rc geninfo_unexecuted_blocks=1 00:05:38.165 00:05:38.165 ' 00:05:38.166 12:38:03 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:38.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.166 --rc genhtml_branch_coverage=1 00:05:38.166 --rc genhtml_function_coverage=1 00:05:38.166 --rc genhtml_legend=1 00:05:38.166 --rc geninfo_all_blocks=1 00:05:38.166 --rc geninfo_unexecuted_blocks=1 00:05:38.166 
00:05:38.166 ' 00:05:38.166 12:38:03 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:38.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.166 --rc genhtml_branch_coverage=1 00:05:38.166 --rc genhtml_function_coverage=1 00:05:38.166 --rc genhtml_legend=1 00:05:38.166 --rc geninfo_all_blocks=1 00:05:38.166 --rc geninfo_unexecuted_blocks=1 00:05:38.166 00:05:38.166 ' 00:05:38.166 12:38:03 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:38.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.166 --rc genhtml_branch_coverage=1 00:05:38.166 --rc genhtml_function_coverage=1 00:05:38.166 --rc genhtml_legend=1 00:05:38.166 --rc geninfo_all_blocks=1 00:05:38.166 --rc geninfo_unexecuted_blocks=1 00:05:38.166 00:05:38.166 ' 00:05:38.166 12:38:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:38.166 12:38:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59700 00:05:38.166 12:38:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59700 00:05:38.166 12:38:03 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59700 ']' 00:05:38.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.166 12:38:03 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.166 12:38:03 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.166 12:38:03 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.166 12:38:03 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.166 12:38:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:38.166 12:38:03 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:38.166 [2024-11-20 12:38:03.614327] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
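This target is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so the allowlist itself is what cmdline.sh tests: the two named methods must answer and everything else must be refused. In outline (rpc.py path as in this workspace; the -32601 refusal appears further down in the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc spdk_get_version          # allowed: returns the version object
    $rpc rpc_get_methods           # allowed: lists exactly these two methods
    $rpc env_dpdk_get_mem_stats    # not allowlisted: -32601 "Method not found"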
00:05:38.166 [2024-11-20 12:38:03.614447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59700 ] 00:05:38.425 [2024-11-20 12:38:03.776699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.425 [2024-11-20 12:38:03.879244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.996 12:38:04 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.996 12:38:04 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:38.996 12:38:04 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:39.255 { 00:05:39.256 "version": "SPDK v25.01-pre git sha1 bc5264bd5", 00:05:39.256 "fields": { 00:05:39.256 "major": 25, 00:05:39.256 "minor": 1, 00:05:39.256 "patch": 0, 00:05:39.256 "suffix": "-pre", 00:05:39.256 "commit": "bc5264bd5" 00:05:39.256 } 00:05:39.256 } 00:05:39.256 12:38:04 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:39.256 12:38:04 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:39.256 12:38:04 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:39.256 12:38:04 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:39.256 12:38:04 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:39.256 12:38:04 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:39.256 12:38:04 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.256 12:38:04 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:39.256 12:38:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:39.256 12:38:04 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.256 12:38:04 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:39.256 12:38:04 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:39.256 12:38:04 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:39.256 12:38:04 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:39.256 12:38:04 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:39.256 12:38:04 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:39.256 12:38:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.256 12:38:04 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:39.256 12:38:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.256 12:38:04 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:39.256 12:38:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.256 12:38:04 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:39.256 12:38:04 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:39.256 12:38:04 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:39.516 request: 00:05:39.516 { 00:05:39.516 "method": "env_dpdk_get_mem_stats", 00:05:39.516 "req_id": 1 00:05:39.516 } 00:05:39.516 Got JSON-RPC error response 00:05:39.516 response: 00:05:39.516 { 00:05:39.516 "code": -32601, 00:05:39.516 "message": "Method not found" 00:05:39.516 } 00:05:39.516 12:38:04 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:39.516 12:38:04 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:39.516 12:38:04 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:39.516 12:38:04 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:39.516 12:38:04 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59700 00:05:39.516 12:38:04 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59700 ']' 00:05:39.516 12:38:04 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59700 00:05:39.516 12:38:04 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:39.516 12:38:04 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.516 12:38:04 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59700 00:05:39.516 12:38:04 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.516 12:38:04 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.516 12:38:04 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59700' 00:05:39.516 killing process with pid 59700 00:05:39.516 12:38:04 app_cmdline -- common/autotest_common.sh@973 -- # kill 59700 00:05:39.516 12:38:04 app_cmdline -- common/autotest_common.sh@978 -- # wait 59700 00:05:40.898 00:05:40.898 real 0m3.006s 00:05:40.898 user 0m3.244s 00:05:40.898 sys 0m0.437s 00:05:40.898 12:38:06 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.898 12:38:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:40.898 ************************************ 00:05:40.898 END TEST app_cmdline 00:05:40.898 ************************************ 00:05:41.159 12:38:06 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:41.159 12:38:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.159 12:38:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.159 12:38:06 -- common/autotest_common.sh@10 -- # set +x 00:05:41.159 ************************************ 00:05:41.159 START TEST version 00:05:41.159 ************************************ 00:05:41.159 12:38:06 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:41.159 * Looking for test storage... 
00:05:41.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:41.159 12:38:06 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.159 12:38:06 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.159 12:38:06 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.159 12:38:06 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.159 12:38:06 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.159 12:38:06 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.159 12:38:06 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.159 12:38:06 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.159 12:38:06 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.159 12:38:06 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.159 12:38:06 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.159 12:38:06 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.159 12:38:06 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.159 12:38:06 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.159 12:38:06 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.159 12:38:06 version -- scripts/common.sh@344 -- # case "$op" in 00:05:41.159 12:38:06 version -- scripts/common.sh@345 -- # : 1 00:05:41.159 12:38:06 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.159 12:38:06 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.159 12:38:06 version -- scripts/common.sh@365 -- # decimal 1 00:05:41.159 12:38:06 version -- scripts/common.sh@353 -- # local d=1 00:05:41.159 12:38:06 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.159 12:38:06 version -- scripts/common.sh@355 -- # echo 1 00:05:41.159 12:38:06 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.159 12:38:06 version -- scripts/common.sh@366 -- # decimal 2 00:05:41.159 12:38:06 version -- scripts/common.sh@353 -- # local d=2 00:05:41.160 12:38:06 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.160 12:38:06 version -- scripts/common.sh@355 -- # echo 2 00:05:41.160 12:38:06 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.160 12:38:06 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.160 12:38:06 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.160 12:38:06 version -- scripts/common.sh@368 -- # return 0 00:05:41.160 12:38:06 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.160 12:38:06 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.160 --rc genhtml_branch_coverage=1 00:05:41.160 --rc genhtml_function_coverage=1 00:05:41.160 --rc genhtml_legend=1 00:05:41.160 --rc geninfo_all_blocks=1 00:05:41.160 --rc geninfo_unexecuted_blocks=1 00:05:41.160 00:05:41.160 ' 00:05:41.160 12:38:06 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.160 --rc genhtml_branch_coverage=1 00:05:41.160 --rc genhtml_function_coverage=1 00:05:41.160 --rc genhtml_legend=1 00:05:41.160 --rc geninfo_all_blocks=1 00:05:41.160 --rc geninfo_unexecuted_blocks=1 00:05:41.160 00:05:41.160 ' 00:05:41.160 12:38:06 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.160 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:41.160 --rc genhtml_branch_coverage=1 00:05:41.160 --rc genhtml_function_coverage=1 00:05:41.160 --rc genhtml_legend=1 00:05:41.160 --rc geninfo_all_blocks=1 00:05:41.160 --rc geninfo_unexecuted_blocks=1 00:05:41.160 00:05:41.160 ' 00:05:41.160 12:38:06 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.160 --rc genhtml_branch_coverage=1 00:05:41.160 --rc genhtml_function_coverage=1 00:05:41.160 --rc genhtml_legend=1 00:05:41.160 --rc geninfo_all_blocks=1 00:05:41.160 --rc geninfo_unexecuted_blocks=1 00:05:41.160 00:05:41.160 ' 00:05:41.160 12:38:06 version -- app/version.sh@17 -- # get_header_version major 00:05:41.160 12:38:06 version -- app/version.sh@14 -- # cut -f2 00:05:41.160 12:38:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:41.160 12:38:06 version -- app/version.sh@14 -- # tr -d '"' 00:05:41.160 12:38:06 version -- app/version.sh@17 -- # major=25 00:05:41.160 12:38:06 version -- app/version.sh@18 -- # get_header_version minor 00:05:41.160 12:38:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:41.160 12:38:06 version -- app/version.sh@14 -- # tr -d '"' 00:05:41.160 12:38:06 version -- app/version.sh@14 -- # cut -f2 00:05:41.160 12:38:06 version -- app/version.sh@18 -- # minor=1 00:05:41.160 12:38:06 version -- app/version.sh@19 -- # get_header_version patch 00:05:41.160 12:38:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:41.160 12:38:06 version -- app/version.sh@14 -- # cut -f2 00:05:41.160 12:38:06 version -- app/version.sh@14 -- # tr -d '"' 00:05:41.160 12:38:06 version -- app/version.sh@19 -- # patch=0 00:05:41.160 12:38:06 version -- app/version.sh@20 -- # get_header_version suffix 00:05:41.160 12:38:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:41.160 12:38:06 version -- app/version.sh@14 -- # tr -d '"' 00:05:41.160 12:38:06 version -- app/version.sh@14 -- # cut -f2 00:05:41.160 12:38:06 version -- app/version.sh@20 -- # suffix=-pre 00:05:41.160 12:38:06 version -- app/version.sh@22 -- # version=25.1 00:05:41.160 12:38:06 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:41.160 12:38:06 version -- app/version.sh@28 -- # version=25.1rc0 00:05:41.160 12:38:06 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:41.160 12:38:06 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:41.160 12:38:06 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:41.160 12:38:06 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:41.160 00:05:41.160 real 0m0.205s 00:05:41.160 user 0m0.134s 00:05:41.160 sys 0m0.100s 00:05:41.160 12:38:06 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.160 12:38:06 version -- common/autotest_common.sh@10 -- # set +x 00:05:41.160 ************************************ 00:05:41.160 END TEST version 00:05:41.160 ************************************ 00:05:41.422 12:38:06 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:41.422 12:38:06 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:41.422 12:38:06 -- spdk/autotest.sh@194 -- # uname -s 00:05:41.422 12:38:06 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:41.422 12:38:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:41.422 12:38:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:41.422 12:38:06 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:41.422 12:38:06 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:41.422 12:38:06 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:41.422 12:38:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.422 12:38:06 -- common/autotest_common.sh@10 -- # set +x 00:05:41.422 ************************************ 00:05:41.422 START TEST blockdev_nvme 00:05:41.422 ************************************ 00:05:41.422 12:38:06 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:41.422 * Looking for test storage... 00:05:41.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:41.422 12:38:06 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.422 12:38:06 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.422 12:38:06 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.422 12:38:06 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.422 12:38:06 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.422 12:38:06 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.422 12:38:06 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.422 12:38:06 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.422 12:38:06 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.423 12:38:06 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.423 12:38:06 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.423 12:38:06 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.423 12:38:06 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.423 12:38:06 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.423 12:38:06 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.423 12:38:06 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:41.423 12:38:06 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:41.423 12:38:06 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.423 12:38:06 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.423 12:38:06 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:41.423 12:38:06 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:41.423 12:38:06 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.423 12:38:06 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:41.423 12:38:06 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.423 12:38:06 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:41.423 12:38:06 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:41.423 12:38:06 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.423 12:38:06 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:41.423 12:38:06 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.423 12:38:06 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.423 12:38:06 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.423 12:38:06 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:41.423 12:38:06 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.423 12:38:06 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.423 --rc genhtml_branch_coverage=1 00:05:41.423 --rc genhtml_function_coverage=1 00:05:41.423 --rc genhtml_legend=1 00:05:41.423 --rc geninfo_all_blocks=1 00:05:41.423 --rc geninfo_unexecuted_blocks=1 00:05:41.423 00:05:41.423 ' 00:05:41.423 12:38:06 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.423 --rc genhtml_branch_coverage=1 00:05:41.423 --rc genhtml_function_coverage=1 00:05:41.423 --rc genhtml_legend=1 00:05:41.423 --rc geninfo_all_blocks=1 00:05:41.423 --rc geninfo_unexecuted_blocks=1 00:05:41.423 00:05:41.423 ' 00:05:41.423 12:38:06 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.423 --rc genhtml_branch_coverage=1 00:05:41.423 --rc genhtml_function_coverage=1 00:05:41.423 --rc genhtml_legend=1 00:05:41.423 --rc geninfo_all_blocks=1 00:05:41.423 --rc geninfo_unexecuted_blocks=1 00:05:41.423 00:05:41.423 ' 00:05:41.423 12:38:06 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.423 --rc genhtml_branch_coverage=1 00:05:41.423 --rc genhtml_function_coverage=1 00:05:41.423 --rc genhtml_legend=1 00:05:41.423 --rc geninfo_all_blocks=1 00:05:41.423 --rc geninfo_unexecuted_blocks=1 00:05:41.423 00:05:41.423 ' 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:41.423 12:38:06 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59872 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59872 00:05:41.423 12:38:06 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59872 ']' 00:05:41.423 12:38:06 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.423 12:38:06 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.423 12:38:06 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:41.423 12:38:06 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.423 12:38:06 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.423 12:38:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:41.707 [2024-11-20 12:38:06.941398] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
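blockdev.sh@47-49 above shows the cleanup idiom these tests rely on: record the target pid, arm a trap that kills it on any exit path, and disarm it (trap - SIGINT SIGTERM EXIT, visible near the end of this log) once teardown happens deliberately. Reduced to a skeleton, with a simplified killprocess standing in for the fuller helper in autotest_common.sh:

    spdk_tgt_pid=
    killprocess() {                     # simplified; the real helper also
        kill "$1" 2>/dev/null || true   # verifies the process name first
        wait "$1" 2>/dev/null || true
    }
    trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' &
    spdk_tgt_pid=$!
    # ... test body ...
    trap - SIGINT SIGTERM EXIT          # disarm on the success path
    killprocess "$spdk_tgt_pid"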
00:05:41.708 [2024-11-20 12:38:06.941652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59872 ] 00:05:41.708 [2024-11-20 12:38:07.100051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.708 [2024-11-20 12:38:07.195334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.285 12:38:07 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.285 12:38:07 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:05:42.285 12:38:07 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:05:42.285 12:38:07 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:05:42.285 12:38:07 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:42.285 12:38:07 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:42.285 12:38:07 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:42.544 12:38:07 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:42.544 12:38:07 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.544 12:38:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:42.802 12:38:08 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.802 12:38:08 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:05:42.802 12:38:08 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.802 12:38:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:42.802 12:38:08 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.802 12:38:08 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:05:42.802 12:38:08 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:05:42.802 12:38:08 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.802 12:38:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:42.802 12:38:08 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.802 12:38:08 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:05:42.802 12:38:08 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.802 12:38:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:42.802 12:38:08 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.802 12:38:08 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:42.802 12:38:08 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.802 12:38:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:42.802 12:38:08 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.802 12:38:08 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:05:42.802 12:38:08 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:05:42.802 12:38:08 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:05:42.802 12:38:08 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.802 12:38:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:42.802 12:38:08 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.802 12:38:08 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:05:42.802 12:38:08 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:05:42.803 12:38:08 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "c9048474-9808-4279-b9a5-064923df873f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c9048474-9808-4279-b9a5-064923df873f",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "863f486a-c4b3-46ae-98cf-787ec9eeb735"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "863f486a-c4b3-46ae-98cf-787ec9eeb735",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "65bcb301-5d67-4915-8d76-a11c8a1cfbbf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "65bcb301-5d67-4915-8d76-a11c8a1cfbbf",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "8b17b7cc-676b-44fe-aa73-dd8c327eaae6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8b17b7cc-676b-44fe-aa73-dd8c327eaae6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "0eb20f31-29c4-45d1-97e8-489b51c81e93"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "0eb20f31-29c4-45d1-97e8-489b51c81e93",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "3ee1e240-464f-4773-ba00-dcf4d265f99b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3ee1e240-464f-4773-ba00-dcf4d265f99b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:42.803 12:38:08 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:05:42.803 12:38:08 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:05:42.803 12:38:08 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:05:42.803 12:38:08 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59872 00:05:42.803 12:38:08 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59872 ']' 00:05:42.803 12:38:08 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59872 00:05:42.803 12:38:08 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:05:42.803 12:38:08 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.803 12:38:08 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59872 00:05:42.803 killing process with pid 59872 00:05:42.803 12:38:08 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.803 12:38:08 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.803 12:38:08 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59872' 00:05:42.803 12:38:08 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59872 00:05:42.803 12:38:08 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59872 00:05:44.718 12:38:09 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:44.718 12:38:09 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:44.718 12:38:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:05:44.719 12:38:09 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.719 12:38:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:44.719 ************************************ 00:05:44.719 START TEST bdev_hello_world 00:05:44.719 ************************************ 00:05:44.719 12:38:09 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:44.719 [2024-11-20 12:38:09.858277] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:05:44.719 [2024-11-20 12:38:09.858406] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59956 ] 00:05:44.719 [2024-11-20 12:38:10.016788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.719 [2024-11-20 12:38:10.117638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.289 [2024-11-20 12:38:10.648756] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:45.289 [2024-11-20 12:38:10.648962] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:05:45.289 [2024-11-20 12:38:10.648990] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:45.289 [2024-11-20 12:38:10.651492] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:45.289 [2024-11-20 12:38:10.651813] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:45.289 [2024-11-20 12:38:10.651833] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:45.289 [2024-11-20 12:38:10.652017] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
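The hello_world step above reduces to one example binary run against a JSON bdev config. A minimal sketch of an equivalent standalone run, assuming the same repo checkout and the QEMU NVMe controller at 0000:00:10.0 (the bdev.json actually used in this run attaches all four controllers, per the load_subsystem_config call earlier); /tmp/bdev.json is a hypothetical scratch path:

# write a one-controller config; the "subsystems" wrapper is the standard SPDK --json file layout
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
        }
      ]
    }
  ]
}
EOF
# same invocation as the run above, pointed at the sketch config
./build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1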
00:05:45.289 00:05:45.289 [2024-11-20 12:38:10.652118] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:45.862 00:05:45.862 real 0m1.567s 00:05:45.862 user 0m1.279s 00:05:45.862 sys 0m0.181s 00:05:45.862 12:38:11 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.862 ************************************ 00:05:45.862 END TEST bdev_hello_world 00:05:45.862 ************************************ 00:05:45.862 12:38:11 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:46.122 12:38:11 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:05:46.122 12:38:11 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:46.122 12:38:11 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.122 12:38:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:46.122 ************************************ 00:05:46.122 START TEST bdev_bounds 00:05:46.122 ************************************ 00:05:46.122 12:38:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:05:46.122 12:38:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59992 00:05:46.122 12:38:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.122 12:38:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:46.122 Process bdevio pid: 59992 00:05:46.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.122 12:38:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59992' 00:05:46.122 12:38:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59992 00:05:46.122 12:38:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59992 ']' 00:05:46.122 12:38:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.122 12:38:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.123 12:38:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.123 12:38:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.123 12:38:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:46.123 [2024-11-20 12:38:11.463411] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
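The bdev_bounds test starting here follows a two-process pattern: bdevio comes up with -w, initializes the bdev layer from the JSON config, and then blocks listening on /var/tmp/spdk.sock; a second process triggers the suites over that socket. Roughly, under the same repo layout (a sketch, not the harness's exact wrapper):

# shell 1: start bdevio; -w makes it wait for the perform_tests RPC instead of running immediately
./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
# shell 2: once /var/tmp/spdk.sock is listening, drive all suites
./test/bdev/bdevio/tests.py perform_tests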
00:05:46.123 [2024-11-20 12:38:11.463854] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59992 ] 00:05:46.123 [2024-11-20 12:38:11.620767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.384 [2024-11-20 12:38:11.721996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.384 [2024-11-20 12:38:11.722215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.384 [2024-11-20 12:38:11.722289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.956 12:38:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.956 12:38:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:05:46.956 12:38:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:46.956 I/O targets: 00:05:46.956 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:05:46.956 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:05:46.956 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:46.956 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:46.956 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:46.956 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:05:46.956 00:05:46.956 00:05:46.956 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.956 http://cunit.sourceforge.net/ 00:05:46.956 00:05:46.956 00:05:46.956 Suite: bdevio tests on: Nvme3n1 00:05:46.956 Test: blockdev write read block ...passed 00:05:46.956 Test: blockdev write zeroes read block ...passed 00:05:46.956 Test: blockdev write zeroes read no split ...passed 00:05:46.956 Test: blockdev write zeroes read split ...passed 00:05:46.956 Test: blockdev write zeroes read split partial ...passed 00:05:46.956 Test: blockdev reset ...[2024-11-20 12:38:12.465471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:05:46.956 [2024-11-20 12:38:12.470560] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:05:46.956 passed 00:05:47.217 Test: blockdev write read 8 blocks ...passed 00:05:47.217 Test: blockdev write read size > 128k ...passed 00:05:47.217 Test: blockdev write read invalid size ...passed 00:05:47.217 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:47.217 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:47.217 Test: blockdev write read max offset ...passed 00:05:47.217 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:47.217 Test: blockdev writev readv 8 blocks ...passed 00:05:47.217 Test: blockdev writev readv 30 x 1block ...passed 00:05:47.217 Test: blockdev writev readv block ...passed 00:05:47.217 Test: blockdev writev readv size > 128k ...passed 00:05:47.217 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:47.217 Test: blockdev comparev and writev ...[2024-11-20 12:38:12.487417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b460a000 len:0x1000 00:05:47.217 [2024-11-20 12:38:12.487487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:47.217 passed 00:05:47.217 Test: blockdev nvme passthru rw ...passed 00:05:47.217 Test: blockdev nvme passthru vendor specific ...passed 00:05:47.217 Test: blockdev nvme admin passthru ...[2024-11-20 12:38:12.489458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:47.217 [2024-11-20 12:38:12.489505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:47.217 passed 00:05:47.217 Test: blockdev copy ...passed 00:05:47.217 Suite: bdevio tests on: Nvme2n3 00:05:47.217 Test: blockdev write read block ...passed 00:05:47.217 Test: blockdev write zeroes read block ...passed 00:05:47.217 Test: blockdev write zeroes read no split ...passed 00:05:47.217 Test: blockdev write zeroes read split ...passed 00:05:47.217 Test: blockdev write zeroes read split partial ...passed 00:05:47.217 Test: blockdev reset ...[2024-11-20 12:38:12.550242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:47.217 [2024-11-20 12:38:12.556092] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:47.217 passed 00:05:47.217 Test: blockdev write read 8 blocks ...passed 00:05:47.217 Test: blockdev write read size > 128k ...passed 00:05:47.217 Test: blockdev write read invalid size ...passed 00:05:47.217 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:47.217 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:47.217 Test: blockdev write read max offset ...passed 00:05:47.217 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:47.217 Test: blockdev writev readv 8 blocks ...passed 00:05:47.217 Test: blockdev writev readv 30 x 1block ...passed 00:05:47.217 Test: blockdev writev readv block ...passed 00:05:47.217 Test: blockdev writev readv size > 128k ...passed 00:05:47.217 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:47.217 Test: blockdev comparev and writev ...[2024-11-20 12:38:12.571851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b8a06000 len:0x1000 00:05:47.217 [2024-11-20 12:38:12.571903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:47.217 passed 00:05:47.217 Test: blockdev nvme passthru rw ...passed 00:05:47.217 Test: blockdev nvme passthru vendor specific ...[2024-11-20 12:38:12.573612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:47.217 [2024-11-20 12:38:12.573647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:47.217 passed 00:05:47.217 Test: blockdev nvme admin passthru ...passed 00:05:47.217 Test: blockdev copy ...passed 00:05:47.217 Suite: bdevio tests on: Nvme2n2 00:05:47.217 Test: blockdev write read block ...passed 00:05:47.217 Test: blockdev write zeroes read block ...passed 00:05:47.217 Test: blockdev write zeroes read no split ...passed 00:05:47.217 Test: blockdev write zeroes read split ...passed 00:05:47.217 Test: blockdev write zeroes read split partial ...passed 00:05:47.217 Test: blockdev reset ...[2024-11-20 12:38:12.635005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:47.217 [2024-11-20 12:38:12.639545] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:47.217 passed 00:05:47.217 Test: blockdev write read 8 blocks ...passed 00:05:47.217 Test: blockdev write read size > 128k ...passed 00:05:47.217 Test: blockdev write read invalid size ...passed 00:05:47.217 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:47.217 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:47.217 Test: blockdev write read max offset ...passed 00:05:47.217 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:47.217 Test: blockdev writev readv 8 blocks ...passed 00:05:47.217 Test: blockdev writev readv 30 x 1block ...passed 00:05:47.217 Test: blockdev writev readv block ...passed 00:05:47.217 Test: blockdev writev readv size > 128k ...passed 00:05:47.217 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:47.218 Test: blockdev comparev and writev ...[2024-11-20 12:38:12.654862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d0c3c000 len:0x1000 00:05:47.218 [2024-11-20 12:38:12.655006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:47.218 passed 00:05:47.218 Test: blockdev nvme passthru rw ...passed 00:05:47.218 Test: blockdev nvme passthru vendor specific ...[2024-11-20 12:38:12.656354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:47.218 [2024-11-20 12:38:12.656382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:47.218 passed 00:05:47.218 Test: blockdev nvme admin passthru ...passed 00:05:47.218 Test: blockdev copy ...passed 00:05:47.218 Suite: bdevio tests on: Nvme2n1 00:05:47.218 Test: blockdev write read block ...passed 00:05:47.218 Test: blockdev write zeroes read block ...passed 00:05:47.218 Test: blockdev write zeroes read no split ...passed 00:05:47.218 Test: blockdev write zeroes read split ...passed 00:05:47.218 Test: blockdev write zeroes read split partial ...passed 00:05:47.218 Test: blockdev reset ...[2024-11-20 12:38:12.718095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:47.218 [2024-11-20 12:38:12.723347] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:05:47.218 passed 
00:05:47.218 Test: blockdev write read 8 blocks ...passed 00:05:47.218 Test: blockdev write read size > 128k ...passed 00:05:47.218 Test: blockdev write read invalid size ...passed 00:05:47.218 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:47.218 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:47.218 Test: blockdev write read max offset ...passed 00:05:47.218 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:47.218 Test: blockdev writev readv 8 blocks ...passed 00:05:47.218 Test: blockdev writev readv 30 x 1block ...passed 00:05:47.478 Test: blockdev writev readv block ...passed 00:05:47.478 Test: blockdev writev readv size > 128k ...passed 00:05:47.478 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:47.478 Test: blockdev comparev and writev ...[2024-11-20 12:38:12.740112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d0c38000 len:0x1000 00:05:47.478 [2024-11-20 12:38:12.740163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:47.478 passed 00:05:47.478 Test: blockdev nvme passthru rw ...passed 00:05:47.478 Test: blockdev nvme passthru vendor specific ...[2024-11-20 12:38:12.742588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:47.478 [2024-11-20 12:38:12.742691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:47.478 passed 00:05:47.478 Test: blockdev nvme admin passthru ...passed 00:05:47.478 Test: blockdev copy ...passed 00:05:47.478 Suite: bdevio tests on: Nvme1n1 00:05:47.478 Test: blockdev write read block ...passed 00:05:47.478 Test: blockdev write zeroes read block ...passed 00:05:47.478 Test: blockdev write zeroes read no split ...passed 00:05:47.478 Test: blockdev write zeroes read split ...passed 00:05:47.478 Test: blockdev write zeroes read split partial ...passed 00:05:47.478 Test: blockdev reset ...[2024-11-20 12:38:12.803581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:05:47.478 [2024-11-20 12:38:12.807498] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:05:47.478 passed 00:05:47.478 Test: blockdev write read 8 blocks ...
00:05:47.478 passed 00:05:47.478 Test: blockdev write read size > 128k ...passed 00:05:47.478 Test: blockdev write read invalid size ...passed 00:05:47.478 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:47.478 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:47.478 Test: blockdev write read max offset ...passed 00:05:47.478 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:47.478 Test: blockdev writev readv 8 blocks ...passed 00:05:47.478 Test: blockdev writev readv 30 x 1block ...passed 00:05:47.478 Test: blockdev writev readv block ...passed 00:05:47.478 Test: blockdev writev readv size > 128k ...passed 00:05:47.478 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:47.478 Test: blockdev comparev and writev ...[2024-11-20 12:38:12.826842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d0c34000 len:0x1000 00:05:47.478 [2024-11-20 12:38:12.826991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:47.478 passed 00:05:47.478 Test: blockdev nvme passthru rw ...passed 00:05:47.478 Test: blockdev nvme passthru vendor specific ...[2024-11-20 12:38:12.829910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:47.478 [2024-11-20 12:38:12.829977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:47.478 passed 00:05:47.478 Test: blockdev nvme admin passthru ...passed 00:05:47.478 Test: blockdev copy ...passed 00:05:47.478 Suite: bdevio tests on: Nvme0n1 00:05:47.478 Test: blockdev write read block ...passed 00:05:47.478 Test: blockdev write zeroes read block ...passed 00:05:47.479 Test: blockdev write zeroes read no split ...passed 00:05:47.479 Test: blockdev write zeroes read split ...passed 00:05:47.479 Test: blockdev write zeroes read split partial ...passed 00:05:47.479 Test: blockdev reset ...[2024-11-20 12:38:12.889664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:05:47.479 [2024-11-20 12:38:12.894521] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:05:47.479 passed 00:05:47.479 Test: blockdev write read 8 blocks ...passed 00:05:47.479 Test: blockdev write read size > 128k ...passed 00:05:47.479 Test: blockdev write read invalid size ...passed 00:05:47.479 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:47.479 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:47.479 Test: blockdev write read max offset ...passed 00:05:47.479 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:47.479 Test: blockdev writev readv 8 blocks ...passed 00:05:47.479 Test: blockdev writev readv 30 x 1block ...passed 00:05:47.479 Test: blockdev writev readv block ...passed 00:05:47.479 Test: blockdev writev readv size > 128k ...passed 00:05:47.479 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:47.479 Test: blockdev comparev and writev ...passed 00:05:47.479 Test: blockdev nvme passthru rw ...[2024-11-20 12:38:12.910556] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:05:47.479 separate metadata which is not supported yet. 
00:05:47.479 passed 00:05:47.479 Test: blockdev nvme passthru vendor specific ...passed 00:05:47.479 Test: blockdev nvme admin passthru ...[2024-11-20 12:38:12.912179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:05:47.479 [2024-11-20 12:38:12.912223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:05:47.479 passed 00:05:47.479 Test: blockdev copy ...passed 00:05:47.479 00:05:47.479 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.479 suites 6 6 n/a 0 0 00:05:47.479 tests 138 138 138 0 0 00:05:47.479 asserts 893 893 893 0 n/a 00:05:47.479 00:05:47.479 Elapsed time = 1.286 seconds 00:05:47.479 0 00:05:47.479 12:38:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59992 00:05:47.479 12:38:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59992 ']' 00:05:47.479 12:38:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59992 00:05:47.479 12:38:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:05:47.479 12:38:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.479 12:38:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59992 00:05:47.479 12:38:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.479 12:38:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.479 killing process with pid 59992 00:05:47.479 12:38:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59992' 00:05:47.479 12:38:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59992 00:05:47.479 12:38:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59992 00:05:50.020 12:38:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:05:50.020 00:05:50.020 real 0m3.623s 00:05:50.020 user 0m9.692s 00:05:50.020 sys 0m0.353s 00:05:50.020 12:38:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.020 ************************************ 00:05:50.020 12:38:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:50.020 END TEST bdev_bounds 00:05:50.020 ************************************ 00:05:50.020 12:38:15 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:50.020 12:38:15 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:50.020 12:38:15 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.020 12:38:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:50.020 ************************************ 00:05:50.020 START TEST bdev_nbd 00:05:50.020 ************************************ 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60052 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60052 /var/tmp/spdk-nbd.sock 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 60052 ']' 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.020 12:38:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:50.020 [2024-11-20 12:38:15.189237] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
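The bdev_nbd test beginning here exports each bdev as a kernel /dev/nbdX device and round-trips a single 4096-byte block through it with dd. The same flow by hand, as a sketch assuming the repo layout used in this run (bdev_svc is started with -r so its RPC socket lands at /var/tmp/spdk-nbd.sock; /tmp/nbdtest stands in for the test's scratch file):

# start a minimal SPDK app that only hosts bdevs, with a dedicated RPC socket
./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json test/bdev/bdev.json &
# map the Nvme0n1 bdev to /dev/nbd0, read one block through the kernel, then tear down
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # expect 1+0 records in/out
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks       # JSON list of active nbd mappings
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0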
00:05:50.020 [2024-11-20 12:38:15.191470] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:50.020 [2024-11-20 12:38:15.364177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.020 [2024-11-20 12:38:15.481500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.586 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.586 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:05:50.586 12:38:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:50.586 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.586 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:50.586 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:05:50.586 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:50.586 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.586 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:50.586 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:05:50.586 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:05:50.586 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:05:50.586 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:05:50.586 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:50.586 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:05:50.843 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:05:50.844 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:05:50.844 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:05:50.844 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:50.844 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:50.844 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:50.844 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:50.844 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:50.844 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:50.844 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:50.844 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:50.844 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:50.844 1+0 records in 
00:05:50.844 1+0 records out 00:05:50.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536899 s, 7.6 MB/s 00:05:50.844 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:50.844 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:50.844 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:50.844 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:50.844 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:50.844 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:50.844 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:50.844 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:05:51.102 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:05:51.102 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:05:51.102 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:05:51.102 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:51.102 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:51.102 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:51.102 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:51.102 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:51.102 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:51.102 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:51.102 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:51.102 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:51.102 1+0 records in 00:05:51.102 1+0 records out 00:05:51.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000848632 s, 4.8 MB/s 00:05:51.102 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:51.102 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:51.102 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:51.102 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:51.102 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:51.102 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:51.102 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:51.102 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:05:51.360 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:05:51.360 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:05:51.360 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:05:51.360 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:05:51.360 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:51.360 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:51.360 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:51.360 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:05:51.360 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:51.360 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:51.360 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:51.360 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:51.360 1+0 records in 00:05:51.360 1+0 records out 00:05:51.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443522 s, 9.2 MB/s 00:05:51.360 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:51.360 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:51.360 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:51.360 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:51.360 12:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:51.360 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:51.360 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:51.360 12:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:05:51.618 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:05:51.618 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:05:51.618 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:05:51.618 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:05:51.618 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:51.618 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:51.618 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:51.618 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:05:51.618 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:51.618 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:51.618 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:51.618 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:51.618 1+0 records in 00:05:51.618 1+0 records out 00:05:51.618 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00091447 s, 4.5 MB/s 00:05:51.618 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:51.618 12:38:17 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:51.618 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:51.618 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:51.618 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:51.618 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:51.618 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:51.618 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:05:51.877 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:05:51.877 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:05:51.877 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:05:51.877 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:05:51.877 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:51.877 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:51.877 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:51.877 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:05:51.877 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:51.877 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:51.877 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:51.877 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:51.877 1+0 records in 00:05:51.877 1+0 records out 00:05:51.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000661128 s, 6.2 MB/s 00:05:51.877 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:51.877 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:51.877 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:51.877 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:51.877 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:51.877 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:51.877 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:51.877 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:05:52.137 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:05:52.137 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:05:52.137 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:05:52.137 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:05:52.137 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:52.137 12:38:17 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:52.137 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:52.137 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:05:52.137 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:52.137 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:52.137 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:52.137 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:52.137 1+0 records in 00:05:52.137 1+0 records out 00:05:52.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120565 s, 3.4 MB/s 00:05:52.137 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:52.137 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:52.137 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:52.137 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:52.137 12:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:52.137 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:52.137 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:52.137 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.397 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:05:52.397 { 00:05:52.397 "nbd_device": "/dev/nbd0", 00:05:52.397 "bdev_name": "Nvme0n1" 00:05:52.397 }, 00:05:52.397 { 00:05:52.397 "nbd_device": "/dev/nbd1", 00:05:52.397 "bdev_name": "Nvme1n1" 00:05:52.397 }, 00:05:52.397 { 00:05:52.397 "nbd_device": "/dev/nbd2", 00:05:52.397 "bdev_name": "Nvme2n1" 00:05:52.397 }, 00:05:52.397 { 00:05:52.397 "nbd_device": "/dev/nbd3", 00:05:52.397 "bdev_name": "Nvme2n2" 00:05:52.397 }, 00:05:52.397 { 00:05:52.397 "nbd_device": "/dev/nbd4", 00:05:52.397 "bdev_name": "Nvme2n3" 00:05:52.397 }, 00:05:52.397 { 00:05:52.397 "nbd_device": "/dev/nbd5", 00:05:52.397 "bdev_name": "Nvme3n1" 00:05:52.397 } 00:05:52.397 ]' 00:05:52.397 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:05:52.397 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:05:52.397 { 00:05:52.397 "nbd_device": "/dev/nbd0", 00:05:52.397 "bdev_name": "Nvme0n1" 00:05:52.397 }, 00:05:52.397 { 00:05:52.397 "nbd_device": "/dev/nbd1", 00:05:52.397 "bdev_name": "Nvme1n1" 00:05:52.397 }, 00:05:52.397 { 00:05:52.397 "nbd_device": "/dev/nbd2", 00:05:52.397 "bdev_name": "Nvme2n1" 00:05:52.397 }, 00:05:52.397 { 00:05:52.397 "nbd_device": "/dev/nbd3", 00:05:52.397 "bdev_name": "Nvme2n2" 00:05:52.397 }, 00:05:52.397 { 00:05:52.397 "nbd_device": "/dev/nbd4", 00:05:52.397 "bdev_name": "Nvme2n3" 00:05:52.397 }, 00:05:52.397 { 00:05:52.397 "nbd_device": "/dev/nbd5", 00:05:52.397 "bdev_name": "Nvme3n1" 00:05:52.397 } 00:05:52.397 ]' 00:05:52.397 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:05:52.397 12:38:17 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:05:52.397 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.397 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:05:52.397 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:52.397 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:52.397 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.397 12:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.655 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.655 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.655 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.655 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.655 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.655 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.655 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:52.655 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.655 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.655 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.966 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.966 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:52.966 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.966 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.966 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.966 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.966 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:52.966 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.966 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.966 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:05:52.966 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:05:52.966 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:05:52.966 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:05:52.966 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.966 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.966 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:05:52.966 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:52.966 12:38:18 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:05:52.966 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.966 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:05:53.227 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:05:53.227 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:05:53.227 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:05:53.227 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.227 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.227 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:05:53.227 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:53.227 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.227 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.227 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:05:53.490 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:05:53.490 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:05:53.490 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:05:53.490 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.490 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.490 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:05:53.490 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:53.490 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.490 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.490 12:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:05:53.752 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:05:53.752 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:05:53.752 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:05:53.752 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.752 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.752 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:05:53.752 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:53.752 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.752 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.752 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.752 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.010 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:54.010 12:38:19 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.010 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:54.010 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:54.010 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:54.010 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.010 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:54.010 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:54.010 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:54.010 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:05:54.010 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:05:54.010 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:05:54.010 12:38:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:54.010 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.010 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:54.010 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.010 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:54.010 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.011 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:54.011 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.011 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:54.011 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.011 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:54.011 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.011 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:05:54.011 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.011 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:54.011 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:05:54.268 /dev/nbd0 00:05:54.268 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.268 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.268 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:54.268 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:54.268 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.268 
12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.268 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:54.268 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:54.268 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.268 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.268 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:54.268 1+0 records in 00:05:54.268 1+0 records out 00:05:54.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296991 s, 13.8 MB/s 00:05:54.268 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:54.268 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:54.268 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:54.268 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.268 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:54.268 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.268 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:54.268 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:05:54.526 /dev/nbd1 00:05:54.527 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.527 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.527 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:54.527 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:54.527 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.527 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.527 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:54.527 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:54.527 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.527 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.527 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:54.527 1+0 records in 00:05:54.527 1+0 records out 00:05:54.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390234 s, 10.5 MB/s 00:05:54.527 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:54.527 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:54.527 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:54.527 12:38:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.527 12:38:19 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:05:54.527 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.527 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:54.527 12:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:05:54.527 /dev/nbd10 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:54.785 1+0 records in 00:05:54.785 1+0 records out 00:05:54.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397478 s, 10.3 MB/s 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:05:54.785 /dev/nbd11 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.785 12:38:20 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.785 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:54.785 1+0 records in 00:05:54.785 1+0 records out 00:05:54.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636425 s, 6.4 MB/s 00:05:54.786 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:54.786 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:54.786 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:55.044 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:55.044 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:55.044 12:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.044 12:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:55.044 12:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:05:55.044 /dev/nbd12 00:05:55.044 12:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:05:55.044 12:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:05:55.044 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:05:55.044 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:55.044 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:55.044 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:55.044 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:05:55.044 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:55.044 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:55.044 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:55.044 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:55.044 1+0 records in 00:05:55.044 1+0 records out 00:05:55.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380433 s, 10.8 MB/s 00:05:55.044 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:55.044 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:55.044 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:55.044 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:55.044 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:55.044 12:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.044 12:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:55.044 12:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:05:55.303 /dev/nbd13 
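Each attach in this loop, including the Nvme3n1 to /dev/nbd13 one just issued, follows the same shape: ask the app listening on the nbd RPC socket to export a bdev as a kernel nbd node, poll /proc/partitions until the node appears, then prove it answers I/O with a single 4 KiB direct read. A condensed sketch of that pattern (the sleep between polls is an assumption; the xtrace only shows the counter and the grep, and the scratch-file path is shortened here):

# Export a bdev as /dev/nbdN over the app's RPC socket, then wait for it.
start_and_wait() {
    local bdev=$1 dev=$2 name i
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk "$bdev" "$dev"
    name=$(basename "$dev")
    for ((i = 1; i <= 20; i++)); do
        # A live nbd device shows up in the kernel's partition table.
        grep -q -w "$name" /proc/partitions && break
        sleep 0.1   # assumed back-off; not visible in the trace above
    done
    # One 4 KiB O_DIRECT read must succeed before the device counts as up.
    dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
}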
00:05:55.303 12:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:05:55.303 12:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:05:55.303 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:05:55.303 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:55.303 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:55.303 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:55.303 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:05:55.303 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:55.303 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:55.303 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:55.303 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:55.303 1+0 records in 00:05:55.303 1+0 records out 00:05:55.303 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00065151 s, 6.3 MB/s 00:05:55.303 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:55.303 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:55.303 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:55.303 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:55.303 12:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:55.303 12:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.303 12:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:55.303 12:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.303 12:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.303 12:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.562 12:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:55.562 { 00:05:55.562 "nbd_device": "/dev/nbd0", 00:05:55.562 "bdev_name": "Nvme0n1" 00:05:55.562 }, 00:05:55.562 { 00:05:55.562 "nbd_device": "/dev/nbd1", 00:05:55.562 "bdev_name": "Nvme1n1" 00:05:55.562 }, 00:05:55.562 { 00:05:55.562 "nbd_device": "/dev/nbd10", 00:05:55.562 "bdev_name": "Nvme2n1" 00:05:55.562 }, 00:05:55.562 { 00:05:55.562 "nbd_device": "/dev/nbd11", 00:05:55.562 "bdev_name": "Nvme2n2" 00:05:55.562 }, 00:05:55.562 { 00:05:55.562 "nbd_device": "/dev/nbd12", 00:05:55.562 "bdev_name": "Nvme2n3" 00:05:55.562 }, 00:05:55.562 { 00:05:55.562 "nbd_device": "/dev/nbd13", 00:05:55.562 "bdev_name": "Nvme3n1" 00:05:55.562 } 00:05:55.562 ]' 00:05:55.562 12:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.562 { 00:05:55.562 "nbd_device": "/dev/nbd0", 00:05:55.562 "bdev_name": "Nvme0n1" 00:05:55.562 }, 00:05:55.562 { 00:05:55.562 "nbd_device": "/dev/nbd1", 00:05:55.562 "bdev_name": "Nvme1n1" 00:05:55.562 }, 00:05:55.562 { 00:05:55.562 "nbd_device": "/dev/nbd10", 00:05:55.562 "bdev_name": "Nvme2n1" 
00:05:55.562 }, 00:05:55.562 { 00:05:55.562 "nbd_device": "/dev/nbd11", 00:05:55.562 "bdev_name": "Nvme2n2" 00:05:55.562 }, 00:05:55.562 { 00:05:55.562 "nbd_device": "/dev/nbd12", 00:05:55.562 "bdev_name": "Nvme2n3" 00:05:55.562 }, 00:05:55.562 { 00:05:55.562 "nbd_device": "/dev/nbd13", 00:05:55.562 "bdev_name": "Nvme3n1" 00:05:55.562 } 00:05:55.562 ]' 00:05:55.562 12:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.562 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.562 /dev/nbd1 00:05:55.562 /dev/nbd10 00:05:55.562 /dev/nbd11 00:05:55.562 /dev/nbd12 00:05:55.562 /dev/nbd13' 00:05:55.562 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.562 /dev/nbd1 00:05:55.562 /dev/nbd10 00:05:55.562 /dev/nbd11 00:05:55.562 /dev/nbd12 00:05:55.562 /dev/nbd13' 00:05:55.562 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.562 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:05:55.562 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:05:55.562 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:05:55.562 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:05:55.562 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:05:55.562 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:55.562 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.562 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.562 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:55.562 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.562 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:05:55.562 256+0 records in 00:05:55.562 256+0 records out 00:05:55.562 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00771404 s, 136 MB/s 00:05:55.562 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.562 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.820 256+0 records in 00:05:55.820 256+0 records out 00:05:55.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0606864 s, 17.3 MB/s 00:05:55.820 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.821 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.821 256+0 records in 00:05:55.821 256+0 records out 00:05:55.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.063598 s, 16.5 MB/s 00:05:55.821 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.821 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:05:55.821 256+0 records in 00:05:55.821 256+0 records out 00:05:55.821 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0670753 s, 15.6 MB/s 00:05:55.821 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.821 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:05:55.821 256+0 records in 00:05:55.821 256+0 records out 00:05:55.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0631768 s, 16.6 MB/s 00:05:55.821 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.821 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:05:56.079 256+0 records in 00:05:56.079 256+0 records out 00:05:56.079 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0630524 s, 16.6 MB/s 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:05:56.079 256+0 records in 00:05:56.079 256+0 records out 00:05:56.079 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0641647 s, 16.3 MB/s 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.079 12:38:21 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.079 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.337 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.337 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.337 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.337 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.337 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.337 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.337 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:56.337 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.337 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.337 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.594 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.594 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.594 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.594 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.594 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.594 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:56.594 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:56.594 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.594 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.594 12:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:05:56.852 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:05:56.852 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:05:56.852 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:05:56.852 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.852 
12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.852 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:05:56.852 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:56.852 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.852 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.852 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:05:56.852 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:05:56.852 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:05:56.852 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:05:56.852 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.852 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.852 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:05:56.852 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:56.852 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.852 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.852 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:05:57.110 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:05:57.110 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:05:57.110 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:05:57.110 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.110 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.110 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:05:57.110 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:57.110 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.110 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.110 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:05:57.367 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:05:57.367 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:05:57.367 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:05:57.367 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.367 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.367 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:05:57.367 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:57.367 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.367 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.367 12:38:22 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.367 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.625 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:57.625 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.625 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:57.625 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:57.625 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:57.625 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.625 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:57.625 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:57.625 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:57.625 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:05:57.625 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:57.625 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:05:57.625 12:38:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:05:57.625 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.625 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:05:57.625 12:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:05:57.883 malloc_lvol_verify 00:05:57.883 12:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:05:58.141 ae2ce391-df4e-4bdc-85a5-0b9b5ee0d844 00:05:58.141 12:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:05:58.141 c074aef3-18a8-4f11-a01c-78d51919c5c1 00:05:58.141 12:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:05:58.407 /dev/nbd0 00:05:58.407 12:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:05:58.407 12:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:05:58.407 12:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:05:58.407 12:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:05:58.407 12:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:05:58.407 mke2fs 1.47.0 (5-Feb-2023) 00:05:58.407 Discarding device blocks: 0/4096 done 00:05:58.407 Creating filesystem with 4096 1k blocks and 1024 inodes 00:05:58.407 00:05:58.407 Allocating group tables: 0/1 done 00:05:58.407 Writing inode tables: 0/1 done 00:05:58.407 Creating journal (1024 blocks): done 00:05:58.407 Writing superblocks and filesystem accounting information: 0/1 done 00:05:58.407 00:05:58.407 12:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
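The lvol round that just completed is an end-to-end write check: it stacks a 16 MiB malloc bdev with 512-byte blocks under a logical volume store, carves a 4 MiB volume out of it, exports that over nbd, confirms the kernel publishes a non-zero capacity, and formats it with ext4 before the teardown that resumes below. Roughly, as a sketch against the same RPC socket (rpc.py path shortened):

# malloc bdev -> lvstore -> lvol -> nbd export -> mkfs as a write check.
RPC='scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
$RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB, 512 B blocks
$RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs
$RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MiB volume in lvs
$RPC nbd_start_disk lvs/lvol /dev/nbd0
(( $(cat /sys/block/nbd0/size) > 0 ))                  # capacity published
mkfs.ext4 /dev/nbd0                                    # exercises real writes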
00:05:58.407 12:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.407 12:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:05:58.407 12:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.407 12:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:58.407 12:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.407 12:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:58.694 12:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:58.694 12:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:58.694 12:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:58.694 12:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.694 12:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.694 12:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:58.694 12:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:58.694 12:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.694 12:38:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60052 00:05:58.694 12:38:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 60052 ']' 00:05:58.694 12:38:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 60052 00:05:58.694 12:38:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:05:58.694 12:38:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.694 12:38:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60052 00:05:58.694 12:38:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.694 12:38:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.694 killing process with pid 60052 00:05:58.694 12:38:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60052' 00:05:58.694 12:38:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 60052 00:05:58.694 12:38:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 60052 00:05:59.626 12:38:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:05:59.626 00:05:59.626 real 0m9.738s 00:05:59.626 user 0m14.018s 00:05:59.626 sys 0m3.125s 00:05:59.626 12:38:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.626 12:38:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:59.626 ************************************ 00:05:59.626 END TEST bdev_nbd 00:05:59.626 ************************************ 00:05:59.626 12:38:24 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:05:59.626 12:38:24 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:05:59.626 skipping fio tests on NVMe due to multi-ns failures. 00:05:59.626 12:38:24 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
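With the nbd coverage closed out and fio skipped, every remaining pass in this suite drives the same bdevperf example binary against the bdevs described in the generated JSON config; only the workload, I/O size, and duration change between them. The verify invocation that starts next has this shape (paths abbreviated from the absolute ones in the trace):

# bdevperf exercises the bdevs from bdev.json directly in user space.
#   -q 128     queue depth          -o 4096  I/O size in bytes
#   -w verify  data-integrity pattern: read back and compare what was written
#   -t 5       run time in seconds  -m 0x3   run on cores 0 and 1
build/examples/bdevperf --json test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3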
00:05:59.626 12:38:24 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:59.626 12:38:24 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:59.626 12:38:24 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:05:59.626 12:38:24 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.626 12:38:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:59.626 ************************************ 00:05:59.626 START TEST bdev_verify 00:05:59.626 ************************************ 00:05:59.626 12:38:24 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:59.626 [2024-11-20 12:38:24.946728] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:05:59.626 [2024-11-20 12:38:24.946858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60430 ] 00:05:59.626 [2024-11-20 12:38:25.106544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.885 [2024-11-20 12:38:25.202232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.885 [2024-11-20 12:38:25.202367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.451 Running I/O for 5 seconds... 00:06:02.758 23488.00 IOPS, 91.75 MiB/s [2024-11-20T12:38:29.212Z] 24320.00 IOPS, 95.00 MiB/s [2024-11-20T12:38:30.144Z] 23616.00 IOPS, 92.25 MiB/s [2024-11-20T12:38:31.123Z] 23216.00 IOPS, 90.69 MiB/s [2024-11-20T12:38:31.123Z] 22707.20 IOPS, 88.70 MiB/s 00:06:05.604 Latency(us) 00:06:05.604 [2024-11-20T12:38:31.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:05.604 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:05.604 Verification LBA range: start 0x0 length 0xbd0bd 00:06:05.604 Nvme0n1 : 5.04 1852.34 7.24 0.00 0.00 68854.83 11393.18 70980.53 00:06:05.604 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:05.604 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:05.604 Nvme0n1 : 5.04 1878.75 7.34 0.00 0.00 67881.99 13812.97 75416.81 00:06:05.604 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:05.604 Verification LBA range: start 0x0 length 0xa0000 00:06:05.604 Nvme1n1 : 5.05 1851.90 7.23 0.00 0.00 68717.07 12653.49 67350.84 00:06:05.604 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:05.604 Verification LBA range: start 0xa0000 length 0xa0000 00:06:05.604 Nvme1n1 : 5.04 1878.24 7.34 0.00 0.00 67785.20 16535.24 72997.02 00:06:05.604 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:05.604 Verification LBA range: start 0x0 length 0x80000 00:06:05.604 Nvme2n1 : 5.05 1851.49 7.23 0.00 0.00 68587.03 12048.54 64124.46 00:06:05.604 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:05.604 Verification LBA range: start 0x80000 length 0x80000 00:06:05.604 Nvme2n1 : 5.07 1881.38 7.35 0.00 0.00 67520.73 7057.72 69770.63 00:06:05.604 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:05.604 Verification LBA range: start 0x0 length 0x80000 00:06:05.604 Nvme2n2 : 5.08 1865.01 7.29 0.00 0.00 68033.05 9326.28 61704.66 00:06:05.604 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:05.604 Verification LBA range: start 0x80000 length 0x80000 00:06:05.604 Nvme2n2 : 5.08 1889.27 7.38 0.00 0.00 67253.24 9578.34 68560.74 00:06:05.604 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:05.604 Verification LBA range: start 0x0 length 0x80000 00:06:05.604 Nvme2n3 : 5.08 1864.61 7.28 0.00 0.00 67910.57 9527.93 64931.05 00:06:05.604 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:05.604 Verification LBA range: start 0x80000 length 0x80000 00:06:05.604 Nvme2n3 : 5.08 1887.92 7.37 0.00 0.00 67156.53 12300.60 71787.13 00:06:05.604 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:05.604 Verification LBA range: start 0x0 length 0x20000 00:06:05.604 Nvme3n1 : 5.08 1864.13 7.28 0.00 0.00 67806.70 8217.21 68560.74 00:06:05.604 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:05.604 Verification LBA range: start 0x20000 length 0x20000 00:06:05.604 Nvme3n1 : 5.09 1887.39 7.37 0.00 0.00 67026.49 7713.08 75820.11 00:06:05.604 [2024-11-20T12:38:31.123Z] =================================================================================================================== 00:06:05.604 [2024-11-20T12:38:31.123Z] Total : 22452.45 87.70 0.00 0.00 67872.27 7057.72 75820.11 00:06:07.504 00:06:07.504 real 0m7.633s 00:06:07.504 user 0m14.395s 00:06:07.504 sys 0m0.209s 00:06:07.504 12:38:32 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.504 12:38:32 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:07.504 ************************************ 00:06:07.504 END TEST bdev_verify 00:06:07.504 ************************************ 00:06:07.504 12:38:32 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:07.504 12:38:32 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:07.504 12:38:32 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.504 12:38:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:07.504 ************************************ 00:06:07.504 START TEST bdev_verify_big_io 00:06:07.504 ************************************ 00:06:07.504 12:38:32 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:07.504 [2024-11-20 12:38:32.619214] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
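In these tables the MiB/s column follows directly from IOPS and the request size: MiB/s = IOPS * io_size / 2^20. For the 4 KiB verify pass above, 1852.34 IOPS * 4096 / 1048576 is about 7.24 MiB/s, matching the Nvme0n1 row; the 64 KiB big-I/O pass whose startup begins here scales the same way, trading IOPS for per-request size. A one-line check:

# Recompute a table row: IOPS times I/O size, expressed in MiB/s.
awk 'BEGIN { printf "%.2f MiB/s\n", 1852.34 * 4096 / 1048576 }'   # -> 7.24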
00:06:07.504 [2024-11-20 12:38:32.619327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60523 ] 00:06:07.504 [2024-11-20 12:38:32.774262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.504 [2024-11-20 12:38:32.856045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.504 [2024-11-20 12:38:32.856103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.070 Running I/O for 5 seconds... 00:06:13.892 1103.00 IOPS, 68.94 MiB/s [2024-11-20T12:38:39.671Z] 2217.50 IOPS, 138.59 MiB/s [2024-11-20T12:38:39.933Z] 2363.33 IOPS, 147.71 MiB/s 00:06:14.414 Latency(us) 00:06:14.414 [2024-11-20T12:38:39.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:14.414 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:14.414 Verification LBA range: start 0x0 length 0xbd0b 00:06:14.414 Nvme0n1 : 5.94 82.59 5.16 0.00 0.00 1497350.06 9880.81 1477685.56 00:06:14.414 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:14.414 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:14.414 Nvme0n1 : 5.73 122.76 7.67 0.00 0.00 974096.40 65737.65 942105.21 00:06:14.414 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:14.414 Verification LBA range: start 0x0 length 0xa000 00:06:14.414 Nvme1n1 : 5.95 82.74 5.17 0.00 0.00 1428210.19 68964.04 1309913.40 00:06:14.414 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:14.414 Verification LBA range: start 0xa000 length 0xa000 00:06:14.414 Nvme1n1 : 5.84 127.37 7.96 0.00 0.00 918130.36 44564.48 967916.31 00:06:14.414 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:14.414 Verification LBA range: start 0x0 length 0x8000 00:06:14.414 Nvme2n1 : 5.95 86.05 5.38 0.00 0.00 1317644.60 29440.79 1329271.73 00:06:14.414 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:14.414 Verification LBA range: start 0x8000 length 0x8000 00:06:14.414 Nvme2n1 : 5.84 126.15 7.88 0.00 0.00 895760.49 40329.85 993727.41 00:06:14.414 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:14.414 Verification LBA range: start 0x0 length 0x8000 00:06:14.414 Nvme2n2 : 5.98 88.48 5.53 0.00 0.00 1218923.95 24802.86 1348630.06 00:06:14.414 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:14.414 Verification LBA range: start 0x8000 length 0x8000 00:06:14.414 Nvme2n2 : 5.84 131.40 8.21 0.00 0.00 841555.76 58881.58 1025991.29 00:06:14.414 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:14.414 Verification LBA range: start 0x0 length 0x8000 00:06:14.414 Nvme2n3 : 6.08 115.80 7.24 0.00 0.00 908236.05 10586.58 1374441.16 00:06:14.414 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:14.414 Verification LBA range: start 0x8000 length 0x8000 00:06:14.414 Nvme2n3 : 5.91 147.33 9.21 0.00 0.00 732444.18 4083.40 1038896.84 00:06:14.414 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:14.414 Verification LBA range: start 0x0 length 0x2000 00:06:14.414 Nvme3n1 : 6.21 178.20 11.14 0.00 0.00 563924.33 272.54 3006993.33 00:06:14.414 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, 
IO size: 65536) 00:06:14.414 Verification LBA range: start 0x2000 length 0x2000 00:06:14.414 Nvme3n1 : 5.67 115.97 7.25 0.00 0.00 1036238.03 17241.01 1000180.18 00:06:14.414 [2024-11-20T12:38:39.933Z] =================================================================================================================== 00:06:14.414 [2024-11-20T12:38:39.933Z] Total : 1404.87 87.80 0.00 0.00 963864.34 272.54 3006993.33 00:06:16.958 00:06:16.958 real 0m9.541s 00:06:16.958 user 0m18.175s 00:06:16.958 sys 0m0.214s 00:06:16.958 12:38:42 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.958 ************************************ 00:06:16.958 END TEST bdev_verify_big_io 00:06:16.958 ************************************ 00:06:16.958 12:38:42 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:16.958 12:38:42 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:16.958 12:38:42 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:16.958 12:38:42 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.958 12:38:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:16.958 ************************************ 00:06:16.958 START TEST bdev_write_zeroes 00:06:16.958 ************************************ 00:06:16.958 12:38:42 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:16.958 [2024-11-20 12:38:42.206082] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:06:16.958 [2024-11-20 12:38:42.206175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60636 ] 00:06:16.958 [2024-11-20 12:38:42.361637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.958 [2024-11-20 12:38:42.464856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.531 Running I/O for 1 seconds... 
00:06:19.861 7277.00 IOPS, 28.43 MiB/s [2024-11-20T12:38:46.387Z] 3986.00 IOPS, 15.57 MiB/s [2024-11-20T12:38:46.387Z] 2659.33 IOPS, 10.39 MiB/s 00:06:20.868 Latency(us) 00:06:20.868 [2024-11-20T12:38:46.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:20.868 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:20.869 Nvme0n1 : 3.30 296.45 1.16 0.00 0.00 306590.52 5570.56 2374621.34 00:06:20.869 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:20.869 Nvme1n1 : 1.23 1193.62 4.66 0.00 0.00 107051.70 9175.04 293601.28 00:06:20.869 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:20.869 Nvme2n1 : 1.23 1109.81 4.34 0.00 0.00 114833.46 9477.51 293601.28 00:06:20.869 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:20.869 Nvme2n2 : 1.24 1139.51 4.45 0.00 0.00 111564.26 9477.51 293601.28 00:06:20.869 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:20.869 Nvme2n3 : 1.24 1138.43 4.45 0.00 0.00 111530.61 9376.69 293601.28 00:06:20.869 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:20.869 Nvme3n1 : 1.24 1137.36 4.44 0.00 0.00 111471.44 9225.45 290374.89 00:06:20.869 [2024-11-20T12:38:46.388Z] =================================================================================================================== 00:06:20.869 [2024-11-20T12:38:46.388Z] Total : 6015.19 23.50 0.00 0.00 135006.02 5570.56 2374621.34 00:06:22.257 00:06:22.257 real 0m5.205s 00:06:22.257 user 0m4.877s 00:06:22.257 sys 0m0.198s 00:06:22.257 12:38:47 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.257 12:38:47 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:22.257 ************************************ 00:06:22.257 END TEST bdev_write_zeroes 00:06:22.257 ************************************ 00:06:22.257 12:38:47 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:22.257 12:38:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:22.257 12:38:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.257 12:38:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:22.257 ************************************ 00:06:22.257 START TEST bdev_json_nonenclosed 00:06:22.257 ************************************ 00:06:22.257 12:38:47 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:22.257 [2024-11-20 12:38:47.477334] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
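The nonenclosed pass starting here is a negative test: bdevperf is handed a config whose top level is not wrapped in a JSON object, and the test passes only if json_config rejects it, as the *ERROR* lines below show. The actual nonenclosed.json is not reproduced in this log; illustratively, a hypothetical stand-in of this shape would trip the 'not enclosed in {}' check:

# Hypothetical stand-in for nonenclosed.json: plausible-looking content, but
# the top level is a bare key/value pair rather than a {...} object.
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": [ { "subsystem": "bdev", "config": [] } ]
EOF
if build/examples/bdevperf --json /tmp/nonenclosed.json \
       -q 128 -o 4096 -w write_zeroes -t 1; then
    echo 'malformed config was wrongly accepted' >&2
    exit 1
fi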
00:06:22.257 [2024-11-20 12:38:47.477438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60711 ] 00:06:22.257 [2024-11-20 12:38:47.634161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.257 [2024-11-20 12:38:47.733961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.257 [2024-11-20 12:38:47.734026] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:22.257 [2024-11-20 12:38:47.734042] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:22.257 [2024-11-20 12:38:47.734052] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:22.518 00:06:22.518 real 0m0.497s 00:06:22.518 user 0m0.304s 00:06:22.518 sys 0m0.089s 00:06:22.518 12:38:47 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.518 ************************************ 00:06:22.518 END TEST bdev_json_nonenclosed 00:06:22.518 ************************************ 00:06:22.518 12:38:47 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:22.518 12:38:47 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:22.518 12:38:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:22.518 12:38:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.518 12:38:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:22.518 ************************************ 00:06:22.518 START TEST bdev_json_nonarray 00:06:22.518 ************************************ 00:06:22.518 12:38:47 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:22.780 [2024-11-20 12:38:48.044660] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:06:22.780 [2024-11-20 12:38:48.044806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60742 ] 00:06:22.780 [2024-11-20 12:38:48.202013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.780 [2024-11-20 12:38:48.282191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.780 [2024-11-20 12:38:48.282264] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:06:22.780 [2024-11-20 12:38:48.282278] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:22.780 [2024-11-20 12:38:48.282286] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:23.042 00:06:23.042 real 0m0.452s 00:06:23.042 user 0m0.246s 00:06:23.042 sys 0m0.102s 00:06:23.042 12:38:48 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.042 12:38:48 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:23.042 ************************************ 00:06:23.042 END TEST bdev_json_nonarray 00:06:23.042 ************************************ 00:06:23.042 12:38:48 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:23.042 12:38:48 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:23.042 12:38:48 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:23.042 12:38:48 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:23.042 12:38:48 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:23.042 12:38:48 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:23.042 12:38:48 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:23.042 12:38:48 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:23.042 12:38:48 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:23.042 12:38:48 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:23.042 12:38:48 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:23.042 00:06:23.042 real 0m41.776s 00:06:23.042 user 1m6.207s 00:06:23.042 sys 0m5.167s 00:06:23.042 12:38:48 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.042 ************************************ 00:06:23.042 END TEST blockdev_nvme 00:06:23.042 ************************************ 00:06:23.042 12:38:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:23.042 12:38:48 -- spdk/autotest.sh@209 -- # uname -s 00:06:23.042 12:38:48 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:23.042 12:38:48 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:23.042 12:38:48 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:23.042 12:38:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.042 12:38:48 -- common/autotest_common.sh@10 -- # set +x 00:06:23.042 ************************************ 00:06:23.042 START TEST blockdev_nvme_gpt 00:06:23.042 ************************************ 00:06:23.042 12:38:48 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:23.304 * Looking for test storage... 
00:06:23.304 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:23.304 12:38:48 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:23.304 12:38:48 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:06:23.304 12:38:48 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:23.304 12:38:48 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:23.304 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.304 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.304 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.304 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.304 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.304 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.304 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.304 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.304 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.304 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.304 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.304 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:23.304 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:23.304 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.305 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.305 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:23.305 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:23.305 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.305 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:23.305 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.305 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:23.305 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:23.305 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.305 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:23.305 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.305 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.305 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.305 12:38:48 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:23.305 12:38:48 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.305 12:38:48 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:23.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.305 --rc genhtml_branch_coverage=1 00:06:23.305 --rc genhtml_function_coverage=1 00:06:23.305 --rc genhtml_legend=1 00:06:23.305 --rc geninfo_all_blocks=1 00:06:23.305 --rc geninfo_unexecuted_blocks=1 00:06:23.305 00:06:23.305 ' 00:06:23.305 12:38:48 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:23.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.305 --rc 
genhtml_branch_coverage=1 00:06:23.305 --rc genhtml_function_coverage=1 00:06:23.305 --rc genhtml_legend=1 00:06:23.305 --rc geninfo_all_blocks=1 00:06:23.305 --rc geninfo_unexecuted_blocks=1 00:06:23.305 00:06:23.305 ' 00:06:23.305 12:38:48 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:23.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.305 --rc genhtml_branch_coverage=1 00:06:23.305 --rc genhtml_function_coverage=1 00:06:23.305 --rc genhtml_legend=1 00:06:23.305 --rc geninfo_all_blocks=1 00:06:23.305 --rc geninfo_unexecuted_blocks=1 00:06:23.305 00:06:23.305 ' 00:06:23.305 12:38:48 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:23.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.305 --rc genhtml_branch_coverage=1 00:06:23.305 --rc genhtml_function_coverage=1 00:06:23.305 --rc genhtml_legend=1 00:06:23.305 --rc geninfo_all_blocks=1 00:06:23.305 --rc geninfo_unexecuted_blocks=1 00:06:23.305 00:06:23.305 ' 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60821 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60821 
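waitforlisten blocks until the freshly launched spdk_tgt is actually serving RPCs on /var/tmp/spdk.sock. A simplified stand-in for what it waits on (not the harness function itself):

# poll until the RPC socket exists, then confirm the target answers an RPC
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null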
00:06:23.305 12:38:48 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60821 ']' 00:06:23.305 12:38:48 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.305 12:38:48 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.305 12:38:48 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.305 12:38:48 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.305 12:38:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:23.305 12:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:23.305 [2024-11-20 12:38:48.806719] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:06:23.305 [2024-11-20 12:38:48.806883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60821 ] 00:06:23.566 [2024-11-20 12:38:48.966622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.827 [2024-11-20 12:38:49.091326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.399 12:38:49 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.399 12:38:49 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:24.399 12:38:49 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:24.399 12:38:49 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:24.399 12:38:49 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:24.659 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:24.920 Waiting for block devices as requested 00:06:24.920 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:24.920 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:25.181 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:25.181 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:30.473 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:30.473 12:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:30.473 12:38:55 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:30.473 12:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:30.473 12:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:30.473 12:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:30.473 12:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:30.473 12:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:30.473 12:38:55 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:30.473 12:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:30.473 12:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:30.473 12:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:30.473 BYT; 00:06:30.473 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:30.473 12:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:30.473 BYT; 00:06:30.473 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:30.473 12:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:30.473 12:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:30.473 12:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:30.473 12:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:30.473 12:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:30.473 12:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:30.473 12:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:30.473 12:38:55 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:30.473 12:38:55 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:30.473 12:38:55 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:30.473 12:38:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:30.473 12:38:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:30.473 12:38:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:30.473 12:38:55 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:30.473 12:38:55 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:30.473 12:38:55 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:30.473 12:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:30.473 12:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:30.473 12:38:55 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:30.473 12:38:55 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:30.473 12:38:55 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:30.473 12:38:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:30.473 12:38:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:30.473 12:38:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:30.473 12:38:55 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:30.473 12:38:55 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:30.473 12:38:55 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:30.473 12:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:30.473 12:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:31.416 The operation has completed successfully. 00:06:31.416 12:38:56 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:32.362 The operation has completed successfully. 00:06:32.362 12:38:57 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:32.935 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:33.508 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:33.508 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:33.508 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:33.508 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:33.770 12:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:33.770 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.770 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:33.770 [] 00:06:33.770 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.770 12:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:33.770 12:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:33.770 12:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:33.770 12:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:33.770 12:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:33.770 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.770 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:34.032 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.032 12:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:34.032 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.032 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:34.032 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.032 12:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:34.032 12:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:34.032 12:38:59 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.032 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:34.032 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.032 12:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:34.032 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.032 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:34.032 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.032 12:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:34.032 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.032 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:34.032 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.032 12:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:34.032 12:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:34.032 12:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:34.032 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.032 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:34.294 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.294 12:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:34.294 12:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:34.295 12:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "1e39f025-fb9c-414c-b111-086e3f62f822"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "1e39f025-fb9c-414c-b111-086e3f62f822",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "7d264c64-d16d-4774-a9d1-82dd45b5349f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7d264c64-d16d-4774-a9d1-82dd45b5349f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "12f69784-13e3-44b9-9f55-53d08d7fcde0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "12f69784-13e3-44b9-9f55-53d08d7fcde0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "2d952425-61b0-44b9-b1bd-8bd69e47b506"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2d952425-61b0-44b9-b1bd-8bd69e47b506",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "c8d201eb-09e0-4188-9c78-b31a740a53fe"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "c8d201eb-09e0-4188-9c78-b31a740a53fe",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:34.295 12:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:34.295 12:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:34.295 12:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:34.295 12:38:59 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60821 00:06:34.295 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60821 ']' 00:06:34.295 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60821 00:06:34.295 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:34.295 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.295 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60821 00:06:34.295 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.295 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.295 killing process with pid 60821 00:06:34.295 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60821' 00:06:34.295 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60821 00:06:34.295 12:38:59 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60821 00:06:36.211 12:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:36.211 12:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:36.211 12:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:36.211 12:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.211 12:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:36.211 ************************************ 00:06:36.211 START TEST bdev_hello_world 00:06:36.211 ************************************ 00:06:36.211 12:39:01 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:36.211 
[2024-11-20 12:39:01.387172] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:06:36.211 [2024-11-20 12:39:01.387314] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61452 ] 00:06:36.211 [2024-11-20 12:39:01.559460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.211 [2024-11-20 12:39:01.695714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.789 [2024-11-20 12:39:02.285582] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:36.789 [2024-11-20 12:39:02.285648] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:36.789 [2024-11-20 12:39:02.285677] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:36.789 [2024-11-20 12:39:02.288532] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:36.789 [2024-11-20 12:39:02.290117] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:36.789 [2024-11-20 12:39:02.290164] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:36.789 [2024-11-20 12:39:02.290653] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:36.789 00:06:36.789 [2024-11-20 12:39:02.290685] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:38.183 00:06:38.183 real 0m1.956s 00:06:38.183 user 0m1.564s 00:06:38.183 sys 0m0.282s 00:06:38.183 12:39:03 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.183 ************************************ 00:06:38.183 END TEST bdev_hello_world 00:06:38.183 ************************************ 00:06:38.183 12:39:03 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:38.183 12:39:03 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:38.183 12:39:03 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:38.183 12:39:03 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.183 12:39:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:38.183 ************************************ 00:06:38.183 START TEST bdev_bounds 00:06:38.183 ************************************ 00:06:38.183 12:39:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:38.183 Process bdevio pid: 61494 00:06:38.183 12:39:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61494 00:06:38.183 12:39:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:38.183 12:39:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61494' 00:06:38.183 12:39:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61494 00:06:38.183 12:39:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61494 ']' 00:06:38.183 12:39:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.183 12:39:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
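bdev_bounds runs the bdevio app in wait mode and then kicks off the suites over RPC with tests.py; reduced to the two commands visible in this run:

# start bdevio in wait mode: it sets up the bdevs and waits for a perform_tests RPC
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
# trigger all registered bdevio suites against the waiting app
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests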
00:06:38.183 12:39:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.183 12:39:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.183 12:39:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:38.183 12:39:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:38.183 [2024-11-20 12:39:03.407610] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:06:38.183 [2024-11-20 12:39:03.407757] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61494 ] 00:06:38.184 [2024-11-20 12:39:03.568064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.445 [2024-11-20 12:39:03.702583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.445 [2024-11-20 12:39:03.702891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.445 [2024-11-20 12:39:03.702930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.018 12:39:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.018 12:39:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:39.018 12:39:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:39.018 I/O targets: 00:06:39.018 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:39.018 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:39.018 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:39.018 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:39.018 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:39.018 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:39.018 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:39.018 00:06:39.018 00:06:39.018 CUnit - A unit testing framework for C - Version 2.1-3 00:06:39.018 http://cunit.sourceforge.net/ 00:06:39.018 00:06:39.018 00:06:39.018 Suite: bdevio tests on: Nvme3n1 00:06:39.018 Test: blockdev write read block ...passed 00:06:39.018 Test: blockdev write zeroes read block ...passed 00:06:39.018 Test: blockdev write zeroes read no split ...passed 00:06:39.018 Test: blockdev write zeroes read split ...passed 00:06:39.279 Test: blockdev write zeroes read split partial ...passed 00:06:39.279 Test: blockdev reset ...[2024-11-20 12:39:04.537889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:39.279 [2024-11-20 12:39:04.544198] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:39.279 passed 00:06:39.279 Test: blockdev write read 8 blocks ...passed 00:06:39.279 Test: blockdev write read size > 128k ...passed 00:06:39.279 Test: blockdev write read invalid size ...passed 00:06:39.279 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:39.279 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:39.279 Test: blockdev write read max offset ...passed 00:06:39.279 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:39.279 Test: blockdev writev readv 8 blocks ...passed 00:06:39.279 Test: blockdev writev readv 30 x 1block ...passed 00:06:39.279 Test: blockdev writev readv block ...passed 00:06:39.279 Test: blockdev writev readv size > 128k ...passed 00:06:39.279 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:39.279 Test: blockdev comparev and writev ...[2024-11-20 12:39:04.565255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1004000 len:0x1000 00:06:39.279 [2024-11-20 12:39:04.565364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:39.279 passed 00:06:39.279 Test: blockdev nvme passthru rw ...passed 00:06:39.279 Test: blockdev nvme passthru vendor specific ...passed 00:06:39.279 Test: blockdev nvme admin passthru ...[2024-11-20 12:39:04.567821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:39.279 [2024-11-20 12:39:04.567867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:39.279 passed 00:06:39.279 Test: blockdev copy ...passed 00:06:39.279 Suite: bdevio tests on: Nvme2n3 00:06:39.279 Test: blockdev write read block ...passed 00:06:39.279 Test: blockdev write zeroes read block ...passed 00:06:39.279 Test: blockdev write zeroes read no split ...passed 00:06:39.279 Test: blockdev write zeroes read split ...passed 00:06:39.279 Test: blockdev write zeroes read split partial ...passed 00:06:39.279 Test: blockdev reset ...[2024-11-20 12:39:04.645036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:39.279 passed 00:06:39.279 Test: blockdev write read 8 blocks ...[2024-11-20 12:39:04.652048] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:39.279 passed 00:06:39.279 Test: blockdev write read size > 128k ...passed 00:06:39.279 Test: blockdev write read invalid size ...passed 00:06:39.279 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:39.279 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:39.279 Test: blockdev write read max offset ...passed 00:06:39.279 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:39.279 Test: blockdev writev readv 8 blocks ...passed 00:06:39.279 Test: blockdev writev readv 30 x 1block ...passed 00:06:39.279 Test: blockdev writev readv block ...passed 00:06:39.279 Test: blockdev writev readv size > 128k ...passed 00:06:39.279 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:39.279 Test: blockdev comparev and writev ...[2024-11-20 12:39:04.675773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1002000 len:0x1000 00:06:39.279 [2024-11-20 12:39:04.675850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:39.279 passed 00:06:39.279 Test: blockdev nvme passthru rw ...passed 00:06:39.279 Test: blockdev nvme passthru vendor specific ...[2024-11-20 12:39:04.678468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:39.279 [2024-11-20 12:39:04.678529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:39.279 passed 00:06:39.279 Test: blockdev nvme admin passthru ...passed 00:06:39.279 Test: blockdev copy ...passed 00:06:39.279 Suite: bdevio tests on: Nvme2n2 00:06:39.279 Test: blockdev write read block ...passed 00:06:39.279 Test: blockdev write zeroes read block ...passed 00:06:39.279 Test: blockdev write zeroes read no split ...passed 00:06:39.279 Test: blockdev write zeroes read split ...passed 00:06:39.279 Test: blockdev write zeroes read split partial ...passed 00:06:39.279 Test: blockdev reset ...[2024-11-20 12:39:04.741972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:39.279 [2024-11-20 12:39:04.748990] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:39.279 passed 00:06:39.279 Test: blockdev write read 8 blocks ...passed 00:06:39.279 Test: blockdev write read size > 128k ...passed 00:06:39.279 Test: blockdev write read invalid size ...passed 00:06:39.279 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:39.279 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:39.279 Test: blockdev write read max offset ...passed 00:06:39.279 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:39.279 Test: blockdev writev readv 8 blocks ...passed 00:06:39.279 Test: blockdev writev readv 30 x 1block ...passed 00:06:39.279 Test: blockdev writev readv block ...passed 00:06:39.279 Test: blockdev writev readv size > 128k ...passed 00:06:39.279 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:39.280 Test: blockdev comparev and writev ...[2024-11-20 12:39:04.769534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6838000 len:0x1000 00:06:39.280 [2024-11-20 12:39:04.769634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:39.280 passed 00:06:39.280 Test: blockdev nvme passthru rw ...passed 00:06:39.280 Test: blockdev nvme passthru vendor specific ...[2024-11-20 12:39:04.772600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:39.280 [2024-11-20 12:39:04.772641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:39.280 passed 00:06:39.280 Test: blockdev nvme admin passthru ...passed 00:06:39.280 Test: blockdev copy ...passed 00:06:39.280 Suite: bdevio tests on: Nvme2n1 00:06:39.280 Test: blockdev write read block ...passed 00:06:39.280 Test: blockdev write zeroes read block ...passed 00:06:39.541 Test: blockdev write zeroes read no split ...passed 00:06:39.541 Test: blockdev write zeroes read split ...passed 00:06:39.541 Test: blockdev write zeroes read split partial ...passed 00:06:39.541 Test: blockdev reset ...[2024-11-20 12:39:04.841708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:39.541 [2024-11-20 12:39:04.848204] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:39.541 passed 00:06:39.541 Test: blockdev write read 8 blocks ...passed 00:06:39.541 Test: blockdev write read size > 128k ...passed 00:06:39.541 Test: blockdev write read invalid size ...passed 00:06:39.541 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:39.541 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:39.541 Test: blockdev write read max offset ...passed 00:06:39.541 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:39.541 Test: blockdev writev readv 8 blocks ...passed 00:06:39.541 Test: blockdev writev readv 30 x 1block ...passed 00:06:39.541 Test: blockdev writev readv block ...passed 00:06:39.541 Test: blockdev writev readv size > 128k ...passed 00:06:39.541 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:39.541 Test: blockdev comparev and writev ...[2024-11-20 12:39:04.872734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6834000 len:0x1000 00:06:39.541 [2024-11-20 12:39:04.872863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:39.541 passed 00:06:39.541 Test: blockdev nvme passthru rw ...passed 00:06:39.542 Test: blockdev nvme passthru vendor specific ...[2024-11-20 12:39:04.875941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:39.542 passed 00:06:39.542 Test: blockdev nvme admin passthru ...[2024-11-20 12:39:04.875992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:39.542 passed 00:06:39.542 Test: blockdev copy ...passed 00:06:39.542 Suite: bdevio tests on: Nvme1n1p2 00:06:39.542 Test: blockdev write read block ...passed 00:06:39.542 Test: blockdev write zeroes read block ...passed 00:06:39.542 Test: blockdev write zeroes read no split ...passed 00:06:39.542 Test: blockdev write zeroes read split ...passed 00:06:39.542 Test: blockdev write zeroes read split partial ...passed 00:06:39.542 Test: blockdev reset ...[2024-11-20 12:39:04.944711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:39.542 [2024-11-20 12:39:04.950348] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:39.542 passed 00:06:39.542 Test: blockdev write read 8 blocks ...passed 00:06:39.542 Test: blockdev write read size > 128k ...passed 00:06:39.542 Test: blockdev write read invalid size ...passed 00:06:39.542 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:39.542 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:39.542 Test: blockdev write read max offset ...passed 00:06:39.542 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:39.542 Test: blockdev writev readv 8 blocks ...passed 00:06:39.542 Test: blockdev writev readv 30 x 1block ...passed 00:06:39.542 Test: blockdev writev readv block ...passed 00:06:39.542 Test: blockdev writev readv size > 128k ...passed 00:06:39.542 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:39.542 Test: blockdev comparev and writev ...[2024-11-20 12:39:04.970560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d6830000 len:0x1000 00:06:39.542 [2024-11-20 12:39:04.970641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:39.542 passed 00:06:39.542 Test: blockdev nvme passthru rw ...passed 00:06:39.542 Test: blockdev nvme passthru vendor specific ...passed 00:06:39.542 Test: blockdev nvme admin passthru ...passed 00:06:39.542 Test: blockdev copy ...passed 00:06:39.542 Suite: bdevio tests on: Nvme1n1p1 00:06:39.542 Test: blockdev write read block ...passed 00:06:39.542 Test: blockdev write zeroes read block ...passed 00:06:39.542 Test: blockdev write zeroes read no split ...passed 00:06:39.542 Test: blockdev write zeroes read split ...passed 00:06:39.542 Test: blockdev write zeroes read split partial ...passed 00:06:39.542 Test: blockdev reset ...[2024-11-20 12:39:05.033316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:39.542 [2024-11-20 12:39:05.039402] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:39.542 passed 00:06:39.542 Test: blockdev write read 8 blocks ...passed 00:06:39.542 Test: blockdev write read size > 128k ...passed 00:06:39.542 Test: blockdev write read invalid size ...passed 00:06:39.542 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:39.542 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:39.542 Test: blockdev write read max offset ...passed 00:06:39.542 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:39.542 Test: blockdev writev readv 8 blocks ...passed 00:06:39.542 Test: blockdev writev readv 30 x 1block ...passed 00:06:39.542 Test: blockdev writev readv block ...passed 00:06:39.542 Test: blockdev writev readv size > 128k ...passed 00:06:39.803 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:39.803 Test: blockdev comparev and writev ...[2024-11-20 12:39:05.060086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c0e0e000 len:0x1000 00:06:39.803 [2024-11-20 12:39:05.060175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:39.803 passed 00:06:39.803 Test: blockdev nvme passthru rw ...passed 00:06:39.803 Test: blockdev nvme passthru vendor specific ...passed 00:06:39.803 Test: blockdev nvme admin passthru ...passed 00:06:39.803 Test: blockdev copy ...passed 00:06:39.803 Suite: bdevio tests on: Nvme0n1 00:06:39.803 Test: blockdev write read block ...passed 00:06:39.803 Test: blockdev write zeroes read block ...passed 00:06:39.803 Test: blockdev write zeroes read no split ...passed 00:06:39.803 Test: blockdev write zeroes read split ...passed 00:06:39.803 Test: blockdev write zeroes read split partial ...passed 00:06:39.803 Test: blockdev reset ...[2024-11-20 12:39:05.123730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:39.803 passed 00:06:39.803 Test: blockdev write read 8 blocks ...[2024-11-20 12:39:05.129963] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:39.803 passed 00:06:39.803 Test: blockdev write read size > 128k ...passed 00:06:39.803 Test: blockdev write read invalid size ...passed 00:06:39.803 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:39.803 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:39.803 Test: blockdev write read max offset ...passed 00:06:39.804 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:39.804 Test: blockdev writev readv 8 blocks ...passed 00:06:39.804 Test: blockdev writev readv 30 x 1block ...passed 00:06:39.804 Test: blockdev writev readv block ...passed 00:06:39.804 Test: blockdev writev readv size > 128k ...passed 00:06:39.804 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:39.804 Test: blockdev comparev and writev ...[2024-11-20 12:39:05.146677] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:39.804 separate metadata which is not supported yet. 
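Note: the *ERROR* skip message above is informational: Nvme0n1 is formatted with a separate (non-interleaved) metadata buffer, which bdevio's compare-and-write case does not support yet, so that case is skipped and the test is still counted as passed. Whether a bdev carries separate metadata can be inspected from the RPC side; a sketch, assuming jq is available and that the md_size/md_interleave fields appear in the bdev_get_bdevs JSON for this bdev as in recent SPDK:

  # Sketch: dump one bdev's metadata layout. A non-zero md_size together with
  # md_interleave=false indicates a separate (non-extended) metadata buffer.
  scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave}'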
00:06:39.804 passed 00:06:39.804 Test: blockdev nvme passthru rw ...passed 00:06:39.804 Test: blockdev nvme passthru vendor specific ...[2024-11-20 12:39:05.148664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:39.804 [2024-11-20 12:39:05.148713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:39.804 passed 00:06:39.804 Test: blockdev nvme admin passthru ...passed 00:06:39.804 Test: blockdev copy ...passed 00:06:39.804 00:06:39.804 Run Summary: Type Total Ran Passed Failed Inactive 00:06:39.804 suites 7 7 n/a 0 0 00:06:39.804 tests 161 161 161 0 0 00:06:39.804 asserts 1025 1025 1025 0 n/a 00:06:39.804 00:06:39.804 Elapsed time = 1.684 seconds 00:06:39.804 0 00:06:39.804 12:39:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61494 00:06:39.804 12:39:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61494 ']' 00:06:39.804 12:39:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61494 00:06:39.804 12:39:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:39.804 12:39:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.804 12:39:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61494 00:06:39.804 12:39:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.804 killing process with pid 61494 00:06:39.804 12:39:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.804 12:39:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61494' 00:06:39.804 12:39:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61494 00:06:39.804 12:39:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61494 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:41.719 00:06:41.719 real 0m3.704s 00:06:41.719 user 0m9.698s 00:06:41.719 sys 0m0.438s 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.719 ************************************ 00:06:41.719 END TEST bdev_bounds 00:06:41.719 ************************************ 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:41.719 12:39:07 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:41.719 12:39:07 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:41.719 12:39:07 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.719 12:39:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:41.719 ************************************ 00:06:41.719 START TEST bdev_nbd 00:06:41.719 ************************************ 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:41.719 12:39:07 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61559 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61559 /var/tmp/spdk-nbd.sock 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61559 ']' 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:41.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.719 12:39:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:41.719 [2024-11-20 12:39:07.200077] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
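Note: nbd_function_test drives everything through a dedicated app instance: bdev_svc is started with the same bdev.json used by the earlier tests and told to listen on its own RPC socket (/var/tmp/spdk-nbd.sock), waitforlisten polls until that socket answers, and each of the seven bdevs is then exported as a kernel /dev/nbd* device. The three RPCs the test exercises are the ones visible in the trace; grouped here for readability (the nbd device argument to nbd_start_disk is optional, and in the first start/stop pass below it is omitted so the target picks the next free /dev/nbd*):

  # Export a bdev as an NBD device, list current mappings, and tear one down.
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0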
00:06:41.719 [2024-11-20 12:39:07.200221] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:41.981 [2024-11-20 12:39:07.367848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.242 [2024-11-20 12:39:07.535816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.815 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.815 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:42.815 12:39:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:42.815 12:39:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.815 12:39:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:42.815 12:39:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:42.815 12:39:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:42.815 12:39:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.815 12:39:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:42.815 12:39:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:42.815 12:39:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:42.815 12:39:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:42.815 12:39:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:42.815 12:39:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:42.815 12:39:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:43.207 12:39:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:43.207 12:39:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:43.207 12:39:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:43.207 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:43.207 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:43.207 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.207 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.207 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:43.207 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:43.207 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.207 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.207 12:39:08 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:43.207 1+0 records in 00:06:43.207 1+0 records out 00:06:43.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123979 s, 3.3 MB/s 00:06:43.207 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.207 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:43.207 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.207 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.207 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:43.207 12:39:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:43.207 12:39:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:43.207 12:39:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:43.470 12:39:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:43.470 12:39:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:43.470 12:39:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:43.470 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:43.470 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:43.470 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.470 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.470 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:43.470 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:43.470 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.470 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.470 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:43.470 1+0 records in 00:06:43.470 1+0 records out 00:06:43.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000667069 s, 6.1 MB/s 00:06:43.470 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.470 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:43.470 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.470 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.470 12:39:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:43.470 12:39:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:43.470 12:39:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:43.470 12:39:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:43.732 12:39:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:43.732 12:39:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:43.732 12:39:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:43.732 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:43.732 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:43.732 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.732 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.732 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:43.732 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:43.732 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.732 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.732 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:43.732 1+0 records in 00:06:43.732 1+0 records out 00:06:43.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00152172 s, 2.7 MB/s 00:06:43.732 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.732 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:43.732 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.732 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.732 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:43.732 12:39:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:43.732 12:39:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:43.733 12:39:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:43.993 12:39:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:43.993 12:39:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:43.993 12:39:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:43.993 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:43.993 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:43.993 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.993 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.993 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:43.993 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:43.993 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.993 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.993 12:39:09 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:43.993 1+0 records in 00:06:43.993 1+0 records out 00:06:43.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121174 s, 3.4 MB/s 00:06:43.993 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.993 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:43.993 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.993 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.993 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:43.993 12:39:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:43.993 12:39:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:43.993 12:39:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:44.255 12:39:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:44.255 12:39:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:44.255 12:39:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:44.255 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:44.255 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:44.255 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:44.255 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:44.255 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:44.255 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:44.255 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:44.255 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:44.255 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:44.255 1+0 records in 00:06:44.255 1+0 records out 00:06:44.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108149 s, 3.8 MB/s 00:06:44.255 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.255 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:44.255 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.255 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:44.255 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:44.255 12:39:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:44.255 12:39:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:44.255 12:39:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
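Note: every mapping is gated by the waitfornbd helper before use. Reconstructed from the xtrace above (the retry delay between attempts is not visible in this log, so the sleep values below are assumptions), the helper first waits for the device to show up in /proc/partitions and then proves it can complete a 4 KiB direct-I/O read:

  # Minimal re-sketch of waitfornbd as seen in the trace; nbdtest path as above.
  waitfornbd() {
      local nbd_name=$1 i
      local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
      # Wait until the kernel lists the device.
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1   # assumed; the delay between retries is not shown in the xtrace
      done
      # Prove the device completes a direct-I/O read of one 4 KiB block.
      for ((i = 1; i <= 20; i++)); do
          if dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct; then
              local size
              size=$(stat -c %s "$tmp")
              rm -f "$tmp"
              [ "$size" != 0 ] && return 0
          fi
          sleep 0.1   # assumed
      done
      return 1
  }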
00:06:44.516 12:39:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:44.516 12:39:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:44.516 12:39:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:44.516 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:44.516 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:44.516 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:44.516 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:44.516 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:44.516 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:44.516 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:44.516 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:44.516 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:44.516 1+0 records in 00:06:44.516 1+0 records out 00:06:44.516 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108472 s, 3.8 MB/s 00:06:44.516 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.516 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:44.516 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.516 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:44.516 12:39:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:44.516 12:39:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:44.516 12:39:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:44.516 12:39:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:44.778 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:44.778 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:44.778 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:44.778 12:39:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:06:44.778 12:39:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:44.778 12:39:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:44.778 12:39:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:44.778 12:39:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:06:44.778 12:39:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:44.778 12:39:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:44.778 12:39:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:44.778 12:39:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:44.778 1+0 records in 00:06:44.778 1+0 records out 00:06:44.778 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00164328 s, 2.5 MB/s 00:06:44.778 12:39:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.778 12:39:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:44.778 12:39:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.778 12:39:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:44.778 12:39:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:44.778 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:44.778 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:44.778 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.039 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:45.039 { 00:06:45.039 "nbd_device": "/dev/nbd0", 00:06:45.039 "bdev_name": "Nvme0n1" 00:06:45.039 }, 00:06:45.039 { 00:06:45.039 "nbd_device": "/dev/nbd1", 00:06:45.039 "bdev_name": "Nvme1n1p1" 00:06:45.039 }, 00:06:45.039 { 00:06:45.039 "nbd_device": "/dev/nbd2", 00:06:45.039 "bdev_name": "Nvme1n1p2" 00:06:45.039 }, 00:06:45.039 { 00:06:45.039 "nbd_device": "/dev/nbd3", 00:06:45.039 "bdev_name": "Nvme2n1" 00:06:45.039 }, 00:06:45.039 { 00:06:45.039 "nbd_device": "/dev/nbd4", 00:06:45.039 "bdev_name": "Nvme2n2" 00:06:45.039 }, 00:06:45.039 { 00:06:45.039 "nbd_device": "/dev/nbd5", 00:06:45.039 "bdev_name": "Nvme2n3" 00:06:45.039 }, 00:06:45.039 { 00:06:45.039 "nbd_device": "/dev/nbd6", 00:06:45.039 "bdev_name": "Nvme3n1" 00:06:45.039 } 00:06:45.039 ]' 00:06:45.039 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:45.039 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:45.039 { 00:06:45.039 "nbd_device": "/dev/nbd0", 00:06:45.039 "bdev_name": "Nvme0n1" 00:06:45.039 }, 00:06:45.039 { 00:06:45.039 "nbd_device": "/dev/nbd1", 00:06:45.039 "bdev_name": "Nvme1n1p1" 00:06:45.039 }, 00:06:45.039 { 00:06:45.039 "nbd_device": "/dev/nbd2", 00:06:45.039 "bdev_name": "Nvme1n1p2" 00:06:45.039 }, 00:06:45.039 { 00:06:45.039 "nbd_device": "/dev/nbd3", 00:06:45.039 "bdev_name": "Nvme2n1" 00:06:45.039 }, 00:06:45.039 { 00:06:45.039 "nbd_device": "/dev/nbd4", 00:06:45.039 "bdev_name": "Nvme2n2" 00:06:45.039 }, 00:06:45.039 { 00:06:45.039 "nbd_device": "/dev/nbd5", 00:06:45.039 "bdev_name": "Nvme2n3" 00:06:45.039 }, 00:06:45.039 { 00:06:45.039 "nbd_device": "/dev/nbd6", 00:06:45.039 "bdev_name": "Nvme3n1" 00:06:45.039 } 00:06:45.039 ]' 00:06:45.039 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:45.039 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:45.039 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.039 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:45.039 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:45.039 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:45.039 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.039 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:45.301 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:45.301 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:45.301 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:45.301 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.301 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.301 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:45.301 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.301 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.301 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.301 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:45.563 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:45.563 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:45.563 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:45.563 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.563 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.563 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:45.563 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.563 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.563 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.563 12:39:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:45.824 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:45.824 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:45.824 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:45.824 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.824 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.824 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:45.824 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.824 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.824 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.824 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:46.086 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:46.086 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:46.086 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:46.086 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.086 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.086 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:46.086 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:46.086 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.086 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.086 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:46.347 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:46.347 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:46.347 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:46.347 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.347 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.347 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:46.347 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:46.347 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.347 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.347 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:46.609 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:46.609 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:46.609 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:46.609 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.609 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.609 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:46.609 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:46.609 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.609 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.609 12:39:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:46.870 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:46.870 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:46.870 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:06:46.870 12:39:12 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.870 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.870 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:46.870 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:46.870 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.870 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.870 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.870 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.870 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:46.870 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:46.870 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:47.132 /dev/nbd0 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.132 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:47.394 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:47.394 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.394 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.394 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.394 1+0 records in 00:06:47.394 1+0 records out 00:06:47.394 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114777 s, 3.6 MB/s 00:06:47.394 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.394 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:47.395 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.395 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.395 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:47.395 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.395 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:47.395 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:47.395 /dev/nbd1 00:06:47.655 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:47.655 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:47.656 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:47.656 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:47.656 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.656 12:39:12 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.656 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:47.656 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:47.656 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.656 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.656 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.656 1+0 records in 00:06:47.656 1+0 records out 00:06:47.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00144958 s, 2.8 MB/s 00:06:47.656 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.656 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:47.656 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.656 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.656 12:39:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:47.656 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.656 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:47.656 12:39:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:47.656 /dev/nbd10 00:06:47.916 12:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:47.916 12:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:47.916 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:47.916 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:47.916 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.916 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.916 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:47.916 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:47.916 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.916 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.916 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.916 1+0 records in 00:06:47.916 1+0 records out 00:06:47.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00170781 s, 2.4 MB/s 00:06:47.916 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.916 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:47.916 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.916 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 
'!=' 0 ']' 00:06:47.916 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:47.916 12:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.916 12:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:47.916 12:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:47.916 /dev/nbd11 00:06:48.178 12:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:48.178 12:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:48.178 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:48.178 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:48.178 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:48.178 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:48.178 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:48.178 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:48.178 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:48.178 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:48.178 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.178 1+0 records in 00:06:48.178 1+0 records out 00:06:48.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107087 s, 3.8 MB/s 00:06:48.178 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.178 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:48.178 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.178 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:48.178 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:48.178 12:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:48.178 12:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:48.178 12:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:48.439 /dev/nbd12 00:06:48.439 12:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:48.439 12:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:48.439 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:48.439 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:48.439 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:48.439 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:48.439 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:48.439 12:39:13 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:48.439 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:48.439 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:48.439 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.439 1+0 records in 00:06:48.439 1+0 records out 00:06:48.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000997527 s, 4.1 MB/s 00:06:48.439 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.439 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:48.439 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.439 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:48.439 12:39:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:48.439 12:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:48.439 12:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:48.439 12:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:48.698 /dev/nbd13 00:06:48.698 12:39:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:48.698 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:48.698 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:48.698 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:48.698 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:48.698 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:48.698 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:48.698 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:48.698 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:48.698 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:48.698 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.698 1+0 records in 00:06:48.698 1+0 records out 00:06:48.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000964377 s, 4.2 MB/s 00:06:48.698 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.698 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:48.698 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.698 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:48.698 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:48.698 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:06:48.698 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:48.698 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:48.962 /dev/nbd14 00:06:48.962 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:48.962 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:48.962 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:06:48.962 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:48.962 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:48.962 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:48.962 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:06:48.962 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:48.962 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:48.962 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:48.962 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.962 1+0 records in 00:06:48.962 1+0 records out 00:06:48.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00149662 s, 2.7 MB/s 00:06:48.962 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.962 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:48.962 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.962 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:48.962 12:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:48.962 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:48.962 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:48.962 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:48.962 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.962 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.229 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:49.229 { 00:06:49.229 "nbd_device": "/dev/nbd0", 00:06:49.229 "bdev_name": "Nvme0n1" 00:06:49.229 }, 00:06:49.229 { 00:06:49.229 "nbd_device": "/dev/nbd1", 00:06:49.229 "bdev_name": "Nvme1n1p1" 00:06:49.229 }, 00:06:49.229 { 00:06:49.229 "nbd_device": "/dev/nbd10", 00:06:49.229 "bdev_name": "Nvme1n1p2" 00:06:49.229 }, 00:06:49.229 { 00:06:49.229 "nbd_device": "/dev/nbd11", 00:06:49.229 "bdev_name": "Nvme2n1" 00:06:49.229 }, 00:06:49.229 { 00:06:49.229 "nbd_device": "/dev/nbd12", 00:06:49.229 "bdev_name": "Nvme2n2" 00:06:49.229 }, 00:06:49.229 { 00:06:49.229 "nbd_device": "/dev/nbd13", 00:06:49.229 "bdev_name": "Nvme2n3" 00:06:49.229 }, 00:06:49.229 { 
00:06:49.229 "nbd_device": "/dev/nbd14", 00:06:49.229 "bdev_name": "Nvme3n1" 00:06:49.229 } 00:06:49.229 ]' 00:06:49.229 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:49.229 { 00:06:49.229 "nbd_device": "/dev/nbd0", 00:06:49.229 "bdev_name": "Nvme0n1" 00:06:49.229 }, 00:06:49.229 { 00:06:49.229 "nbd_device": "/dev/nbd1", 00:06:49.229 "bdev_name": "Nvme1n1p1" 00:06:49.229 }, 00:06:49.229 { 00:06:49.229 "nbd_device": "/dev/nbd10", 00:06:49.229 "bdev_name": "Nvme1n1p2" 00:06:49.229 }, 00:06:49.229 { 00:06:49.229 "nbd_device": "/dev/nbd11", 00:06:49.229 "bdev_name": "Nvme2n1" 00:06:49.229 }, 00:06:49.229 { 00:06:49.229 "nbd_device": "/dev/nbd12", 00:06:49.229 "bdev_name": "Nvme2n2" 00:06:49.229 }, 00:06:49.229 { 00:06:49.229 "nbd_device": "/dev/nbd13", 00:06:49.229 "bdev_name": "Nvme2n3" 00:06:49.229 }, 00:06:49.229 { 00:06:49.229 "nbd_device": "/dev/nbd14", 00:06:49.229 "bdev_name": "Nvme3n1" 00:06:49.229 } 00:06:49.229 ]' 00:06:49.229 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.229 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:49.229 /dev/nbd1 00:06:49.229 /dev/nbd10 00:06:49.229 /dev/nbd11 00:06:49.229 /dev/nbd12 00:06:49.229 /dev/nbd13 00:06:49.229 /dev/nbd14' 00:06:49.229 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:49.229 /dev/nbd1 00:06:49.229 /dev/nbd10 00:06:49.229 /dev/nbd11 00:06:49.229 /dev/nbd12 00:06:49.229 /dev/nbd13 00:06:49.229 /dev/nbd14' 00:06:49.229 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.229 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:49.229 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:49.229 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:49.229 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:49.229 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:49.229 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:49.229 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.229 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:49.230 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:49.230 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:49.230 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:49.230 256+0 records in 00:06:49.230 256+0 records out 00:06:49.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00830989 s, 126 MB/s 00:06:49.230 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.230 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:49.491 256+0 records in 00:06:49.491 256+0 records out 00:06:49.491 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.225696 s, 4.6 MB/s 00:06:49.491 
12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.491 12:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:49.753 256+0 records in 00:06:49.753 256+0 records out 00:06:49.753 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.234773 s, 4.5 MB/s 00:06:49.753 12:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.753 12:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:50.017 256+0 records in 00:06:50.017 256+0 records out 00:06:50.017 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.248765 s, 4.2 MB/s 00:06:50.017 12:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.017 12:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:50.277 256+0 records in 00:06:50.277 256+0 records out 00:06:50.277 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.254198 s, 4.1 MB/s 00:06:50.277 12:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.277 12:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:50.537 256+0 records in 00:06:50.537 256+0 records out 00:06:50.537 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.255954 s, 4.1 MB/s 00:06:50.537 12:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.537 12:39:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:50.798 256+0 records in 00:06:50.798 256+0 records out 00:06:50.798 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.250379 s, 4.2 MB/s 00:06:50.798 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.798 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:50.798 256+0 records in 00:06:50.798 256+0 records out 00:06:50.798 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.231989 s, 4.5 MB/s 00:06:50.798 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:50.798 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:50.798 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.798 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:50.798 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:50.798 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:50.798 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:50.798 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.798 12:39:16 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:51.059 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.059 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:51.059 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.059 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:51.059 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.059 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:51.059 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.059 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:51.059 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.059 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:51.059 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.059 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:51.059 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:51.059 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:51.059 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.059 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:51.059 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.059 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:51.059 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.059 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:51.319 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:51.319 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:51.319 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:51.319 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.319 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.319 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:51.319 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:51.319 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.319 
12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.319 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:51.579 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:51.579 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:51.579 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:51.579 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.579 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.579 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:51.579 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:51.579 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.579 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.579 12:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:51.840 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:51.840 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:51.840 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:51.840 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.840 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.840 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:51.840 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:51.841 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.841 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.841 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:51.841 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:51.841 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:51.841 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:51.841 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.841 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.841 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:51.841 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:51.841 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.841 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.100 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:52.100 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:52.100 12:39:17 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:52.100 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:52.100 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.100 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.100 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:52.100 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:52.100 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.100 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.100 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:52.362 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:52.362 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:52.362 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:52.362 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.362 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.362 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:52.362 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:52.362 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.362 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.362 12:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:06:52.623 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:06:52.623 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:06:52.623 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:06:52.623 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.623 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.623 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:06:52.623 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:52.623 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.623 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.624 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.624 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.886 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:52.886 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:52.886 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.886 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.886 
12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.886 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.886 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:52.886 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:52.886 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.886 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:52.886 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:52.886 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:52.886 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:52.886 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.886 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:52.886 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:53.147 malloc_lvol_verify 00:06:53.147 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:53.409 709fb7ba-3e2b-44d1-a368-5048a9e27f29 00:06:53.410 12:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:53.671 8f0fcd18-6ee5-4211-9416-60cf4471ba4c 00:06:53.671 12:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:53.931 /dev/nbd0 00:06:53.931 12:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:53.931 12:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:53.931 12:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:53.931 12:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:53.931 12:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:53.931 mke2fs 1.47.0 (5-Feb-2023) 00:06:53.931 Discarding device blocks: 0/4096 done 00:06:53.931 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:53.931 00:06:53.931 Allocating group tables: 0/1 done 00:06:53.931 Writing inode tables: 0/1 done 00:06:53.931 Creating journal (1024 blocks): done 00:06:53.931 Writing superblocks and filesystem accounting information: 0/1 done 00:06:53.931 00:06:53.931 12:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:53.931 12:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.931 12:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:53.931 12:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:53.931 12:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:53.931 12:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.932 12:39:19 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:54.193 12:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:54.193 12:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:54.193 12:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:54.193 12:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.193 12:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.193 12:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:54.193 12:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.193 12:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.193 12:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61559 00:06:54.193 12:39:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61559 ']' 00:06:54.193 12:39:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61559 00:06:54.193 12:39:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:54.193 12:39:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.193 12:39:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61559 00:06:54.193 12:39:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.193 12:39:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.193 12:39:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61559' 00:06:54.193 killing process with pid 61559 00:06:54.193 12:39:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61559 00:06:54.193 12:39:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61559 00:07:02.342 12:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:02.342 00:07:02.342 real 0m19.807s 00:07:02.342 user 0m22.924s 00:07:02.342 sys 0m6.136s 00:07:02.342 12:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.342 ************************************ 00:07:02.342 END TEST bdev_nbd 00:07:02.342 ************************************ 00:07:02.342 12:39:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:02.342 skipping fio tests on NVMe due to multi-ns failures. 00:07:02.342 12:39:26 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:02.342 12:39:26 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:02.342 12:39:26 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:02.342 12:39:26 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
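Note: the bdev_nbd section that just closed exercises one pattern per device: export the bdev over NBD, poll /proc/partitions until the kernel node appears, push data through with dd, read it back with cmp, then unexport. A condensed sketch of that loop, assuming a /tmp scratch file and a 0.1 s poll interval in place of the repo paths and helper internals used in the trace above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    $rpc -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0          # export a bdev as /dev/nbd0
    for ((i = 1; i <= 20; i++)); do                           # waitfornbd: up to 20 polls
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1                                             # assumed delay; not shown in the trace
    done

    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256  # 1 MiB of reference data
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                   # byte-for-byte read-back check

    $rpc -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd
    $rpc -s "$sock" nbd_stop_disk /dev/nbd0                   # unexport; the count drops to 0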
00:07:02.342 12:39:26 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:02.342 12:39:26 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:02.342 12:39:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:02.342 12:39:26 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.342 12:39:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:02.342 ************************************ 00:07:02.342 START TEST bdev_verify 00:07:02.342 ************************************ 00:07:02.342 12:39:26 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:02.343 [2024-11-20 12:39:27.065988] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:07:02.343 [2024-11-20 12:39:27.066142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61996 ] 00:07:02.343 [2024-11-20 12:39:27.231346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:02.343 [2024-11-20 12:39:27.373609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.343 [2024-11-20 12:39:27.373868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.604 Running I/O for 5 seconds... 
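Note: the verify pass now running is driven by the bdevperf example app; every flag below is taken from the command line in the trace. Annotated, with -C and the trailing '' left as-is since this log never expands their meaning:

    #   --json bdev.json   attach the bdevs under test from a JSON config
    #   -q 128             queue depth per target
    #   -o 4096            I/O size in bytes (one 4 KiB block)
    #   -w verify          write-pattern-then-read-back workload
    #   -t 5               run for 5 seconds
    #   -m 0x3             core mask: cores 0 and 1, matching the two reactors above
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''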
00:07:04.936 16768.00 IOPS, 65.50 MiB/s [2024-11-20T12:39:31.398Z] 16832.00 IOPS, 65.75 MiB/s [2024-11-20T12:39:32.343Z] 16832.00 IOPS, 65.75 MiB/s [2024-11-20T12:39:33.287Z] 16672.00 IOPS, 65.12 MiB/s [2024-11-20T12:39:33.287Z] 16537.60 IOPS, 64.60 MiB/s 00:07:07.768 Latency(us) 00:07:07.768 [2024-11-20T12:39:33.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:07.768 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:07.768 Verification LBA range: start 0x0 length 0xbd0bd 00:07:07.768 Nvme0n1 : 5.09 1182.07 4.62 0.00 0.00 108075.52 18350.08 88725.66 00:07:07.768 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:07.768 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:07.768 Nvme0n1 : 5.07 1160.38 4.53 0.00 0.00 110031.08 28029.24 86305.87 00:07:07.768 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:07.768 Verification LBA range: start 0x0 length 0x4ff80 00:07:07.768 Nvme1n1p1 : 5.09 1181.68 4.62 0.00 0.00 107749.89 16434.41 85499.27 00:07:07.769 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:07.769 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:07.769 Nvme1n1p1 : 5.08 1159.56 4.53 0.00 0.00 109964.72 29642.44 82676.18 00:07:07.769 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:07.769 Verification LBA range: start 0x0 length 0x4ff7f 00:07:07.769 Nvme1n1p2 : 5.09 1181.31 4.61 0.00 0.00 107558.71 16535.24 83482.78 00:07:07.769 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:07.769 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:07.769 Nvme1n1p2 : 5.08 1159.11 4.53 0.00 0.00 109775.01 30650.68 83079.48 00:07:07.769 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:07.769 Verification LBA range: start 0x0 length 0x80000 00:07:07.769 Nvme2n1 : 5.10 1180.61 4.61 0.00 0.00 107424.15 18551.73 80256.39 00:07:07.769 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:07.769 Verification LBA range: start 0x80000 length 0x80000 00:07:07.769 Nvme2n1 : 5.08 1158.71 4.53 0.00 0.00 109666.63 29844.09 81062.99 00:07:07.769 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:07.769 Verification LBA range: start 0x0 length 0x80000 00:07:07.769 Nvme2n2 : 5.10 1180.28 4.61 0.00 0.00 107279.49 18450.90 83482.78 00:07:07.769 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:07.769 Verification LBA range: start 0x80000 length 0x80000 00:07:07.769 Nvme2n2 : 5.08 1158.31 4.52 0.00 0.00 109554.34 27021.00 82676.18 00:07:07.769 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:07.769 Verification LBA range: start 0x0 length 0x80000 00:07:07.769 Nvme2n3 : 5.10 1179.95 4.61 0.00 0.00 107057.19 18350.08 85902.57 00:07:07.769 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:07.769 Verification LBA range: start 0x80000 length 0x80000 00:07:07.769 Nvme2n3 : 5.08 1157.97 4.52 0.00 0.00 109414.04 20971.52 84289.38 00:07:07.769 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:07.769 Verification LBA range: start 0x0 length 0x20000 00:07:07.769 Nvme3n1 : 5.10 1179.62 4.61 0.00 0.00 106951.32 16938.54 89128.96 00:07:07.769 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:07.769 Verification LBA range: start 0x20000 length 0x20000 
00:07:07.769 Nvme3n1 : 5.09 1157.55 4.52 0.00 0.00 109328.20 21273.99 87112.47 00:07:07.769 [2024-11-20T12:39:33.288Z] =================================================================================================================== 00:07:07.769 [2024-11-20T12:39:33.288Z] Total : 16377.13 63.97 0.00 0.00 108547.30 16434.41 89128.96 00:07:11.977 00:07:11.977 real 0m10.308s 00:07:11.977 user 0m19.377s 00:07:11.977 sys 0m0.360s 00:07:11.977 12:39:37 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.977 12:39:37 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:11.977 ************************************ 00:07:11.977 END TEST bdev_verify 00:07:11.977 ************************************ 00:07:11.977 12:39:37 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:11.977 12:39:37 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:11.977 12:39:37 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.977 12:39:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.977 ************************************ 00:07:11.977 START TEST bdev_verify_big_io 00:07:11.977 ************************************ 00:07:11.977 12:39:37 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:11.977 [2024-11-20 12:39:37.457925] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:07:11.977 [2024-11-20 12:39:37.458095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62100 ] 00:07:12.238 [2024-11-20 12:39:37.624437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:12.500 [2024-11-20 12:39:37.756606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.500 [2024-11-20 12:39:37.756711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.073 Running I/O for 5 seconds... 
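Note: in the table above, the trailing Average/min/max columns are latencies in microseconds per the Latency(us) banner, and the MiB/s column is just IOPS scaled by the 4 KiB I/O size: MiB/s = IOPS * 4096 / 2^20, so the totals row checks out at 16377.13 * 4096 / 1048576 = 63.97. The big-I/O pass just launched repeats the verify workload with -o 65536, where the same identity collapses to MiB/s = IOPS / 16:

    # Sanity-check both totals against the figures printed in this log:
    awk 'BEGIN { printf "%.2f\n", 16377.13 * 4096 / 1048576 }'   # 63.97 (4 KiB verify table above)
    awk 'BEGIN { printf "%.2f\n", 1526.00 / 16 }'                # 95.38 (first 64 KiB progress line below)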
00:07:18.217 1526.00 IOPS, 95.38 MiB/s [2024-11-20T12:39:44.677Z] 2037.00 IOPS, 127.31 MiB/s [2024-11-20T12:39:44.677Z] 1949.00 IOPS, 121.81 MiB/s [2024-11-20T12:39:45.248Z] 2151.00 IOPS, 134.44 MiB/s 00:07:19.729 Latency(us) 00:07:19.729 [2024-11-20T12:39:45.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.729 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:19.729 Verification LBA range: start 0x0 length 0xbd0b 00:07:19.729 Nvme0n1 : 5.80 82.79 5.17 0.00 0.00 1483963.16 19559.98 1703532.70 00:07:19.729 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:19.729 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:19.729 Nvme0n1 : 5.85 109.56 6.85 0.00 0.00 1110667.84 25508.63 1103424.59 00:07:19.729 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:19.729 Verification LBA range: start 0x0 length 0x4ff8 00:07:19.729 Nvme1n1p1 : 5.92 82.77 5.17 0.00 0.00 1397677.73 115343.36 1445421.69 00:07:19.729 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:19.729 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:19.729 Nvme1n1p1 : 5.74 111.59 6.97 0.00 0.00 1081648.36 104051.00 955010.76 00:07:19.729 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:19.729 Verification LBA range: start 0x0 length 0x4ff7 00:07:19.729 Nvme1n1p2 : 5.92 86.42 5.40 0.00 0.00 1291278.18 119376.34 1226027.32 00:07:19.729 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:19.729 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:19.729 Nvme1n1p2 : 5.86 114.21 7.14 0.00 0.00 1028331.50 117763.15 1122782.92 00:07:19.729 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:19.729 Verification LBA range: start 0x0 length 0x8000 00:07:19.729 Nvme2n1 : 6.05 95.19 5.95 0.00 0.00 1125154.57 41338.09 1238932.87 00:07:19.729 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:19.729 Verification LBA range: start 0x8000 length 0x8000 00:07:19.729 Nvme2n1 : 5.86 113.31 7.08 0.00 0.00 1002999.63 118569.75 1013085.74 00:07:19.729 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:19.729 Verification LBA range: start 0x0 length 0x8000 00:07:19.729 Nvme2n2 : 6.18 112.32 7.02 0.00 0.00 930010.47 26819.35 1477685.56 00:07:19.729 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:19.729 Verification LBA range: start 0x8000 length 0x8000 00:07:19.729 Nvme2n2 : 5.98 123.56 7.72 0.00 0.00 902561.63 31457.28 1032444.06 00:07:19.729 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:19.729 Verification LBA range: start 0x0 length 0x8000 00:07:19.729 Nvme2n3 : 6.27 127.57 7.97 0.00 0.00 788743.69 9326.28 2477865.75 00:07:19.729 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:19.729 Verification LBA range: start 0x8000 length 0x8000 00:07:19.729 Nvme2n3 : 5.98 123.37 7.71 0.00 0.00 879023.97 32062.23 1200216.22 00:07:19.729 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:19.729 Verification LBA range: start 0x0 length 0x2000 00:07:19.729 Nvme3n1 : 6.46 230.18 14.39 0.00 0.00 420260.26 267.82 1606741.07 00:07:19.729 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:19.729 Verification LBA range: start 0x2000 length 0x2000 00:07:19.729 Nvme3n1 : 6.00 138.74 8.67 0.00 
0.00 770241.17 6856.07 1084066.26 00:07:19.729 [2024-11-20T12:39:45.248Z] =================================================================================================================== 00:07:19.729 [2024-11-20T12:39:45.248Z] Total : 1651.59 103.22 0.00 0.00 935747.19 267.82 2477865.75 00:07:23.937 00:07:23.937 real 0m11.610s 00:07:23.937 user 0m22.010s 00:07:23.937 sys 0m0.378s 00:07:23.937 12:39:48 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.937 12:39:48 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:23.937 ************************************ 00:07:23.937 END TEST bdev_verify_big_io 00:07:23.937 ************************************ 00:07:23.937 12:39:49 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:23.937 12:39:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:23.937 12:39:49 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.937 12:39:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:23.937 ************************************ 00:07:23.937 START TEST bdev_write_zeroes 00:07:23.937 ************************************ 00:07:23.937 12:39:49 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:23.937 [2024-11-20 12:39:49.099873] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:07:23.937 [2024-11-20 12:39:49.099987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62215 ] 00:07:23.937 [2024-11-20 12:39:49.257359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.937 [2024-11-20 12:39:49.355295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.509 Running I/O for 1 seconds... 
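Note: the write_zeroes run starting here has no verification phase: -w write_zeroes issues zero-fill write commands rather than patterned data. The other differences from the verify passes are visible in its banner:

    #   -t 1    one-second run (hence "Running I/O for 1 seconds..." above)
    #   -c 0x1  EAL core mask in the DPDK parameters: a single core, so only
    #           reactor 0 starts, unlike the two-reactor verify runs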
00:07:25.449 66752.00 IOPS, 260.75 MiB/s 00:07:25.449 Latency(us) 00:07:25.449 [2024-11-20T12:39:50.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.449 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.449 Nvme0n1 : 1.02 9499.51 37.11 0.00 0.00 13439.81 10939.47 24702.03 00:07:25.449 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.450 Nvme1n1p1 : 1.03 9487.76 37.06 0.00 0.00 13440.54 10737.82 25105.33 00:07:25.450 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.450 Nvme1n1p2 : 1.03 9475.89 37.02 0.00 0.00 13428.78 10838.65 24097.08 00:07:25.450 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.450 Nvme2n1 : 1.03 9465.17 36.97 0.00 0.00 13424.95 11141.12 23895.43 00:07:25.450 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.450 Nvme2n2 : 1.03 9454.51 36.93 0.00 0.00 13378.01 10989.88 23391.31 00:07:25.450 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.450 Nvme2n3 : 1.03 9443.92 36.89 0.00 0.00 13347.07 9326.28 23290.49 00:07:25.450 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.450 Nvme3n1 : 1.03 9433.22 36.85 0.00 0.00 13321.90 7864.32 24702.03 00:07:25.450 [2024-11-20T12:39:50.969Z] =================================================================================================================== 00:07:25.450 [2024-11-20T12:39:50.969Z] Total : 66259.98 258.83 0.00 0.00 13397.29 7864.32 25105.33 00:07:26.394 00:07:26.394 real 0m2.802s 00:07:26.394 user 0m2.506s 00:07:26.394 sys 0m0.182s 00:07:26.394 12:39:51 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.394 12:39:51 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:26.394 ************************************ 00:07:26.394 END TEST bdev_write_zeroes 00:07:26.394 ************************************ 00:07:26.394 12:39:51 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:26.394 12:39:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:26.394 12:39:51 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.394 12:39:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:26.394 ************************************ 00:07:26.394 START TEST bdev_json_nonenclosed 00:07:26.394 ************************************ 00:07:26.394 12:39:51 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:26.655 [2024-11-20 12:39:51.946914] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
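Note: bdev_json_nonenclosed, just launched, is a negative test: it points bdevperf at a config that is not a single top-level JSON object, and the json_config error that follows is the expected result. The file itself is never printed in this log; hypothetically, a bare top-level array such as [ { "subsystem": "bdev", "config": [] } ] would trip the same "not enclosed in {}" check.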
00:07:26.655 [2024-11-20 12:39:51.947029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62268 ] 00:07:26.655 [2024-11-20 12:39:52.109153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.917 [2024-11-20 12:39:52.202140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.917 [2024-11-20 12:39:52.202214] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:26.917 [2024-11-20 12:39:52.202231] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:26.917 [2024-11-20 12:39:52.202240] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.917 00:07:26.917 real 0m0.494s 00:07:26.917 user 0m0.302s 00:07:26.917 sys 0m0.087s 00:07:26.917 12:39:52 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.917 12:39:52 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:26.917 ************************************ 00:07:26.917 END TEST bdev_json_nonenclosed 00:07:26.917 ************************************ 00:07:26.917 12:39:52 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:26.917 12:39:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:26.917 12:39:52 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.917 12:39:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:26.917 ************************************ 00:07:26.917 START TEST bdev_json_nonarray 00:07:26.917 ************************************ 00:07:26.917 12:39:52 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:27.179 [2024-11-20 12:39:52.482521] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:07:27.179 [2024-11-20 12:39:52.482634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62298 ] 00:07:27.179 [2024-11-20 12:39:52.642484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.439 [2024-11-20 12:39:52.746132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.439 [2024-11-20 12:39:52.746222] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:27.439 [2024-11-20 12:39:52.746239] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:27.439 [2024-11-20 12:39:52.746248] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:27.439 00:07:27.439 real 0m0.497s 00:07:27.439 user 0m0.308s 00:07:27.439 sys 0m0.085s 00:07:27.439 12:39:52 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.439 12:39:52 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:27.439 ************************************ 00:07:27.439 END TEST bdev_json_nonarray 00:07:27.439 ************************************ 00:07:27.701 12:39:52 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:27.701 12:39:52 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:27.701 12:39:52 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:27.701 12:39:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.701 12:39:52 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.701 12:39:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:27.701 ************************************ 00:07:27.701 START TEST bdev_gpt_uuid 00:07:27.701 ************************************ 00:07:27.701 12:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:27.701 12:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:27.701 12:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:27.701 12:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62319 00:07:27.701 12:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:27.701 12:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62319 00:07:27.701 12:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62319 ']' 00:07:27.701 12:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.701 12:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.701 12:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:27.701 12:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.701 12:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.701 12:39:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:27.701 [2024-11-20 12:39:53.042028] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
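Note: with both JSON negative tests passed, bdev_gpt_uuid is bringing up a standalone spdk_tgt (pid 62319) listening on the default /var/tmp/spdk.sock. Its assertions against the JSON below reduce to looking a bdev up by its GPT unique partition GUID and checking that the GUID round-trips through the bdev metadata; roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # default socket: /var/tmp/spdk.sock
    guid=6f89f330-603b-4116-ac73-2ca8eae53030         # SPDK_TEST_first, from the output below
    bdev=$($rpc bdev_get_bdevs -b "$guid")
    echo "$bdev" | jq -r 'length'                                          # expect 1
    echo "$bdev" | jq -r '.[0].aliases[0]'                                 # expect $guid
    echo "$bdev" | jq -r '.[0].driver_specific.gpt.unique_partition_guid'  # expect $guid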
00:07:27.701 [2024-11-20 12:39:53.042149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62319 ] 00:07:27.701 [2024-11-20 12:39:53.201732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.963 [2024-11-20 12:39:53.301141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.535 12:39:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.536 12:39:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:28.536 12:39:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:28.536 12:39:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.536 12:39:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:28.798 Some configs were skipped because the RPC state that can call them passed over. 00:07:28.798 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.798 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:28.798 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.798 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:28.798 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.798 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:28.798 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.798 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:28.798 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.798 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:28.798 { 00:07:28.798 "name": "Nvme1n1p1", 00:07:28.798 "aliases": [ 00:07:28.798 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:28.798 ], 00:07:28.798 "product_name": "GPT Disk", 00:07:28.798 "block_size": 4096, 00:07:28.798 "num_blocks": 655104, 00:07:28.798 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:28.798 "assigned_rate_limits": { 00:07:28.798 "rw_ios_per_sec": 0, 00:07:28.798 "rw_mbytes_per_sec": 0, 00:07:28.798 "r_mbytes_per_sec": 0, 00:07:28.798 "w_mbytes_per_sec": 0 00:07:28.798 }, 00:07:28.798 "claimed": false, 00:07:28.798 "zoned": false, 00:07:28.798 "supported_io_types": { 00:07:28.798 "read": true, 00:07:28.798 "write": true, 00:07:28.798 "unmap": true, 00:07:28.798 "flush": true, 00:07:28.798 "reset": true, 00:07:28.798 "nvme_admin": false, 00:07:28.798 "nvme_io": false, 00:07:28.798 "nvme_io_md": false, 00:07:28.798 "write_zeroes": true, 00:07:28.798 "zcopy": false, 00:07:28.798 "get_zone_info": false, 00:07:28.798 "zone_management": false, 00:07:28.798 "zone_append": false, 00:07:28.798 "compare": true, 00:07:28.798 "compare_and_write": false, 00:07:28.798 "abort": true, 00:07:28.798 "seek_hole": false, 00:07:28.798 "seek_data": false, 00:07:28.798 "copy": true, 00:07:28.798 "nvme_iov_md": false 00:07:28.798 }, 00:07:28.798 "driver_specific": { 
00:07:28.798 "gpt": { 00:07:28.798 "base_bdev": "Nvme1n1", 00:07:28.798 "offset_blocks": 256, 00:07:28.798 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:28.798 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:28.798 "partition_name": "SPDK_TEST_first" 00:07:28.798 } 00:07:28.798 } 00:07:28.798 } 00:07:28.798 ]' 00:07:28.798 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:28.798 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:28.798 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:28.798 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:28.798 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:29.060 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:29.060 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:29.060 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.060 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:29.060 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.060 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:29.060 { 00:07:29.060 "name": "Nvme1n1p2", 00:07:29.060 "aliases": [ 00:07:29.060 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:29.060 ], 00:07:29.060 "product_name": "GPT Disk", 00:07:29.060 "block_size": 4096, 00:07:29.060 "num_blocks": 655103, 00:07:29.060 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:29.060 "assigned_rate_limits": { 00:07:29.060 "rw_ios_per_sec": 0, 00:07:29.060 "rw_mbytes_per_sec": 0, 00:07:29.060 "r_mbytes_per_sec": 0, 00:07:29.060 "w_mbytes_per_sec": 0 00:07:29.060 }, 00:07:29.060 "claimed": false, 00:07:29.060 "zoned": false, 00:07:29.060 "supported_io_types": { 00:07:29.060 "read": true, 00:07:29.060 "write": true, 00:07:29.060 "unmap": true, 00:07:29.060 "flush": true, 00:07:29.060 "reset": true, 00:07:29.060 "nvme_admin": false, 00:07:29.060 "nvme_io": false, 00:07:29.060 "nvme_io_md": false, 00:07:29.060 "write_zeroes": true, 00:07:29.060 "zcopy": false, 00:07:29.060 "get_zone_info": false, 00:07:29.060 "zone_management": false, 00:07:29.060 "zone_append": false, 00:07:29.060 "compare": true, 00:07:29.060 "compare_and_write": false, 00:07:29.060 "abort": true, 00:07:29.060 "seek_hole": false, 00:07:29.060 "seek_data": false, 00:07:29.060 "copy": true, 00:07:29.060 "nvme_iov_md": false 00:07:29.060 }, 00:07:29.060 "driver_specific": { 00:07:29.061 "gpt": { 00:07:29.061 "base_bdev": "Nvme1n1", 00:07:29.061 "offset_blocks": 655360, 00:07:29.061 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:29.061 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:29.061 "partition_name": "SPDK_TEST_second" 00:07:29.061 } 00:07:29.061 } 00:07:29.061 } 00:07:29.061 ]' 00:07:29.061 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:29.061 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:29.061 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:29.061 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:29.061 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:29.061 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:29.061 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62319 00:07:29.061 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62319 ']' 00:07:29.061 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62319 00:07:29.061 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:29.061 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.061 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62319 00:07:29.061 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.061 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.061 killing process with pid 62319 00:07:29.061 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62319' 00:07:29.061 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62319 00:07:29.061 12:39:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62319 00:07:30.497 00:07:30.497 real 0m2.984s 00:07:30.497 user 0m3.105s 00:07:30.497 sys 0m0.363s 00:07:30.497 12:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.497 12:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:30.497 ************************************ 00:07:30.497 END TEST bdev_gpt_uuid 00:07:30.497 ************************************ 00:07:30.497 12:39:55 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:30.497 12:39:55 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:30.497 12:39:55 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:30.497 12:39:55 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:30.497 12:39:55 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:30.497 12:39:55 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:30.497 12:39:55 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:30.497 12:39:55 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:30.497 12:39:55 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:31.069 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:31.069 Waiting for block devices as requested 00:07:31.069 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:31.069 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:31.329 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:31.329 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:36.615 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:36.615 12:40:01 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:36.615 12:40:01 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:36.615 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:36.615 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:36.615 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:36.615 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:36.615 12:40:02 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:36.615 00:07:36.615 real 1m13.484s 00:07:36.615 user 1m35.444s 00:07:36.615 sys 0m11.120s 00:07:36.615 12:40:02 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.615 ************************************ 00:07:36.615 END TEST blockdev_nvme_gpt 00:07:36.615 ************************************ 00:07:36.615 12:40:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:36.615 12:40:02 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:36.615 12:40:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.615 12:40:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.615 12:40:02 -- common/autotest_common.sh@10 -- # set +x 00:07:36.615 ************************************ 00:07:36.615 START TEST nvme 00:07:36.615 ************************************ 00:07:36.615 12:40:02 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:36.875 * Looking for test storage... 00:07:36.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:36.875 12:40:02 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:36.875 12:40:02 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:36.875 12:40:02 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:36.875 12:40:02 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:36.875 12:40:02 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.875 12:40:02 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.875 12:40:02 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.876 12:40:02 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.876 12:40:02 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.876 12:40:02 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.876 12:40:02 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.876 12:40:02 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.876 12:40:02 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.876 12:40:02 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.876 12:40:02 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.876 12:40:02 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:36.876 12:40:02 nvme -- scripts/common.sh@345 -- # : 1 00:07:36.876 12:40:02 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.876 12:40:02 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:36.876 12:40:02 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:36.876 12:40:02 nvme -- scripts/common.sh@353 -- # local d=1 00:07:36.876 12:40:02 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.876 12:40:02 nvme -- scripts/common.sh@355 -- # echo 1 00:07:36.876 12:40:02 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.876 12:40:02 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:36.876 12:40:02 nvme -- scripts/common.sh@353 -- # local d=2 00:07:36.876 12:40:02 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.876 12:40:02 nvme -- scripts/common.sh@355 -- # echo 2 00:07:36.876 12:40:02 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.876 12:40:02 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.876 12:40:02 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.876 12:40:02 nvme -- scripts/common.sh@368 -- # return 0 00:07:36.876 12:40:02 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.876 12:40:02 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:36.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.876 --rc genhtml_branch_coverage=1 00:07:36.876 --rc genhtml_function_coverage=1 00:07:36.876 --rc genhtml_legend=1 00:07:36.876 --rc geninfo_all_blocks=1 00:07:36.876 --rc geninfo_unexecuted_blocks=1 00:07:36.876 00:07:36.876 ' 00:07:36.876 12:40:02 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:36.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.876 --rc genhtml_branch_coverage=1 00:07:36.876 --rc genhtml_function_coverage=1 00:07:36.876 --rc genhtml_legend=1 00:07:36.876 --rc geninfo_all_blocks=1 00:07:36.876 --rc geninfo_unexecuted_blocks=1 00:07:36.876 00:07:36.876 ' 00:07:36.876 12:40:02 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:36.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.876 --rc genhtml_branch_coverage=1 00:07:36.876 --rc genhtml_function_coverage=1 00:07:36.876 --rc genhtml_legend=1 00:07:36.876 --rc geninfo_all_blocks=1 00:07:36.876 --rc geninfo_unexecuted_blocks=1 00:07:36.876 00:07:36.876 ' 00:07:36.876 12:40:02 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:36.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.876 --rc genhtml_branch_coverage=1 00:07:36.876 --rc genhtml_function_coverage=1 00:07:36.876 --rc genhtml_legend=1 00:07:36.876 --rc geninfo_all_blocks=1 00:07:36.876 --rc geninfo_unexecuted_blocks=1 00:07:36.876 00:07:36.876 ' 00:07:36.876 12:40:02 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:37.446 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:37.736 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.736 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.736 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.736 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:38.022 12:40:03 nvme -- nvme/nvme.sh@79 -- # uname 00:07:38.022 12:40:03 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:38.022 12:40:03 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:38.022 Waiting for stub to ready for secondary processes... 
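What follows is the harness's stub-readiness poll: launch the stub as the primary process, then loop until it creates its sentinel file, giving up if the stub dies first. A minimal bash sketch of that idiom, with the stub arguments and sentinel path taken from the trace below; the failure message and exit path are illustrative assumptions, not the harness's exact code:
/home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
stubpid=$!                               # the trace below shows stubpid=62956
while [ ! -e /var/run/spdk_stub0 ]; do   # sentinel appears once the stub is ready
    # Assumed guard: stop polling if the stub exited before becoming ready.
    [[ -e /proc/$stubpid ]] || { echo "stub $stubpid died before ready" >&2; exit 1; }
    sleep 1s
done
echo done.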
00:07:38.022 12:40:03 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:38.022 12:40:03 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:38.022 12:40:03 nvme -- common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:38.022 12:40:03 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:38.022 12:40:03 nvme -- common/autotest_common.sh@1075 -- # stubpid=62956 00:07:38.022 12:40:03 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:38.022 12:40:03 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:38.022 12:40:03 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62956 ]] 00:07:38.022 12:40:03 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:38.022 12:40:03 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:38.022 [2024-11-20 12:40:03.312159] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:07:38.022 [2024-11-20 12:40:03.312450] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:38.961 [2024-11-20 12:40:04.264222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:38.961 12:40:04 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:38.961 12:40:04 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62956 ]] 00:07:38.961 12:40:04 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:38.961 [2024-11-20 12:40:04.359086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.961 [2024-11-20 12:40:04.359394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.961 [2024-11-20 12:40:04.359419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.961 [2024-11-20 12:40:04.373174] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:38.961 [2024-11-20 12:40:04.373345] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:38.961 [2024-11-20 12:40:04.382725] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:38.961 [2024-11-20 12:40:04.382911] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:38.961 [2024-11-20 12:40:04.384371] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:38.961 [2024-11-20 12:40:04.384626] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:38.961 [2024-11-20 12:40:04.385147] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:38.961 [2024-11-20 12:40:04.387874] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:38.961 [2024-11-20 12:40:04.388234] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:38.961 [2024-11-20 12:40:04.388413] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:38.961 [2024-11-20 12:40:04.391328] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:38.961 [2024-11-20 12:40:04.392371] 
nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:38.962 [2024-11-20 12:40:04.392462] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:38.962 [2024-11-20 12:40:04.392516] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:38.962 [2024-11-20 12:40:04.392567] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:39.910 done. 00:07:39.910 12:40:05 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:39.910 12:40:05 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:39.910 12:40:05 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:39.910 12:40:05 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:39.910 12:40:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.910 12:40:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:39.910 ************************************ 00:07:39.910 START TEST nvme_reset 00:07:39.910 ************************************ 00:07:39.910 12:40:05 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:40.169 Initializing NVMe Controllers 00:07:40.169 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:40.169 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:40.169 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:40.169 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:40.169 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:40.169 ************************************ 00:07:40.169 END TEST nvme_reset 00:07:40.169 ************************************ 00:07:40.169 00:07:40.169 real 0m0.226s 00:07:40.169 user 0m0.071s 00:07:40.169 sys 0m0.107s 00:07:40.169 12:40:05 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.169 12:40:05 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:40.169 12:40:05 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:40.169 12:40:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.169 12:40:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.169 12:40:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:40.169 ************************************ 00:07:40.169 START TEST nvme_identify 00:07:40.169 ************************************ 00:07:40.169 12:40:05 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:40.169 12:40:05 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:40.169 12:40:05 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:40.169 12:40:05 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:40.169 12:40:05 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:40.169 12:40:05 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:40.169 12:40:05 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:40.169 12:40:05 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:40.169 12:40:05 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:40.169 12:40:05 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:07:40.169 12:40:05 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:40.169 12:40:05 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:40.169 12:40:05 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:40.433 [2024-11-20 12:40:05.796922] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62989 terminated unexpected 00:07:40.433 ===================================================== 00:07:40.433 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:40.433 ===================================================== 00:07:40.433 Controller Capabilities/Features 00:07:40.433 ================================ 00:07:40.433 Vendor ID: 1b36 00:07:40.433 Subsystem Vendor ID: 1af4 00:07:40.433 Serial Number: 12340 00:07:40.433 Model Number: QEMU NVMe Ctrl 00:07:40.433 Firmware Version: 8.0.0 00:07:40.433 Recommended Arb Burst: 6 00:07:40.433 IEEE OUI Identifier: 00 54 52 00:07:40.433 Multi-path I/O 00:07:40.433 May have multiple subsystem ports: No 00:07:40.433 May have multiple controllers: No 00:07:40.433 Associated with SR-IOV VF: No 00:07:40.433 Max Data Transfer Size: 524288 00:07:40.433 Max Number of Namespaces: 256 00:07:40.433 Max Number of I/O Queues: 64 00:07:40.433 NVMe Specification Version (VS): 1.4 00:07:40.433 NVMe Specification Version (Identify): 1.4 00:07:40.433 Maximum Queue Entries: 2048 00:07:40.433 Contiguous Queues Required: Yes 00:07:40.433 Arbitration Mechanisms Supported 00:07:40.433 Weighted Round Robin: Not Supported 00:07:40.433 Vendor Specific: Not Supported 00:07:40.433 Reset Timeout: 7500 ms 00:07:40.433 Doorbell Stride: 4 bytes 00:07:40.433 NVM Subsystem Reset: Not Supported 00:07:40.433 Command Sets Supported 00:07:40.433 NVM Command Set: Supported 00:07:40.433 Boot Partition: Not Supported 00:07:40.433 Memory Page Size Minimum: 4096 bytes 00:07:40.433 Memory Page Size Maximum: 65536 bytes 00:07:40.433 Persistent Memory Region: Not Supported 00:07:40.433 Optional Asynchronous Events Supported 00:07:40.433 Namespace Attribute Notices: Supported 00:07:40.433 Firmware Activation Notices: Not Supported 00:07:40.433 ANA Change Notices: Not Supported 00:07:40.433 PLE Aggregate Log Change Notices: Not Supported 00:07:40.433 LBA Status Info Alert Notices: Not Supported 00:07:40.433 EGE Aggregate Log Change Notices: Not Supported 00:07:40.433 Normal NVM Subsystem Shutdown event: Not Supported 00:07:40.433 Zone Descriptor Change Notices: Not Supported 00:07:40.433 Discovery Log Change Notices: Not Supported 00:07:40.433 Controller Attributes 00:07:40.433 128-bit Host Identifier: Not Supported 00:07:40.433 Non-Operational Permissive Mode: Not Supported 00:07:40.433 NVM Sets: Not Supported 00:07:40.433 Read Recovery Levels: Not Supported 00:07:40.433 Endurance Groups: Not Supported 00:07:40.433 Predictable Latency Mode: Not Supported 00:07:40.433 Traffic Based Keep ALive: Not Supported 00:07:40.433 Namespace Granularity: Not Supported 00:07:40.433 SQ Associations: Not Supported 00:07:40.433 UUID List: Not Supported 00:07:40.433 Multi-Domain Subsystem: Not Supported 00:07:40.433 Fixed Capacity Management: Not Supported 00:07:40.433 Variable Capacity Management: Not Supported 00:07:40.433 Delete Endurance Group: Not Supported 00:07:40.433 Delete NVM Set: Not Supported 00:07:40.433 Extended LBA Formats Supported: Supported 00:07:40.433 Flexible Data Placement 
Supported: Not Supported 00:07:40.433 00:07:40.433 Controller Memory Buffer Support 00:07:40.433 ================================ 00:07:40.433 Supported: No 00:07:40.433 00:07:40.433 Persistent Memory Region Support 00:07:40.433 ================================ 00:07:40.433 Supported: No 00:07:40.433 00:07:40.433 Admin Command Set Attributes 00:07:40.433 ============================ 00:07:40.433 Security Send/Receive: Not Supported 00:07:40.433 Format NVM: Supported 00:07:40.433 Firmware Activate/Download: Not Supported 00:07:40.433 Namespace Management: Supported 00:07:40.433 Device Self-Test: Not Supported 00:07:40.433 Directives: Supported 00:07:40.433 NVMe-MI: Not Supported 00:07:40.433 Virtualization Management: Not Supported 00:07:40.433 Doorbell Buffer Config: Supported 00:07:40.433 Get LBA Status Capability: Not Supported 00:07:40.433 Command & Feature Lockdown Capability: Not Supported 00:07:40.433 Abort Command Limit: 4 00:07:40.433 Async Event Request Limit: 4 00:07:40.433 Number of Firmware Slots: N/A 00:07:40.433 Firmware Slot 1 Read-Only: N/A 00:07:40.433 Firmware Activation Without Reset: N/A 00:07:40.433 Multiple Update Detection Support: N/A 00:07:40.433 Firmware Update Granularity: No Information Provided 00:07:40.433 Per-Namespace SMART Log: Yes 00:07:40.433 Asymmetric Namespace Access Log Page: Not Supported 00:07:40.433 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:40.433 Command Effects Log Page: Supported 00:07:40.433 Get Log Page Extended Data: Supported 00:07:40.433 Telemetry Log Pages: Not Supported 00:07:40.433 Persistent Event Log Pages: Not Supported 00:07:40.433 Supported Log Pages Log Page: May Support 00:07:40.433 Commands Supported & Effects Log Page: Not Supported 00:07:40.433 Feature Identifiers & Effects Log Page:May Support 00:07:40.433 NVMe-MI Commands & Effects Log Page: May Support 00:07:40.433 Data Area 4 for Telemetry Log: Not Supported 00:07:40.433 Error Log Page Entries Supported: 1 00:07:40.433 Keep Alive: Not Supported 00:07:40.433 00:07:40.433 NVM Command Set Attributes 00:07:40.433 ========================== 00:07:40.433 Submission Queue Entry Size 00:07:40.433 Max: 64 00:07:40.433 Min: 64 00:07:40.433 Completion Queue Entry Size 00:07:40.433 Max: 16 00:07:40.433 Min: 16 00:07:40.433 Number of Namespaces: 256 00:07:40.433 Compare Command: Supported 00:07:40.433 Write Uncorrectable Command: Not Supported 00:07:40.433 Dataset Management Command: Supported 00:07:40.433 Write Zeroes Command: Supported 00:07:40.433 Set Features Save Field: Supported 00:07:40.433 Reservations: Not Supported 00:07:40.433 Timestamp: Supported 00:07:40.433 Copy: Supported 00:07:40.433 Volatile Write Cache: Present 00:07:40.433 Atomic Write Unit (Normal): 1 00:07:40.433 Atomic Write Unit (PFail): 1 00:07:40.433 Atomic Compare & Write Unit: 1 00:07:40.433 Fused Compare & Write: Not Supported 00:07:40.433 Scatter-Gather List 00:07:40.433 SGL Command Set: Supported 00:07:40.433 SGL Keyed: Not Supported 00:07:40.433 SGL Bit Bucket Descriptor: Not Supported 00:07:40.433 SGL Metadata Pointer: Not Supported 00:07:40.433 Oversized SGL: Not Supported 00:07:40.433 SGL Metadata Address: Not Supported 00:07:40.433 SGL Offset: Not Supported 00:07:40.433 Transport SGL Data Block: Not Supported 00:07:40.433 Replay Protected Memory Block: Not Supported 00:07:40.433 00:07:40.433 Firmware Slot Information 00:07:40.433 ========================= 00:07:40.433 Active slot: 1 00:07:40.433 Slot 1 Firmware Revision: 1.0 00:07:40.433 00:07:40.433 00:07:40.433 Commands Supported and Effects 
00:07:40.433 ============================== 00:07:40.433 Admin Commands 00:07:40.433 -------------- 00:07:40.433 Delete I/O Submission Queue (00h): Supported 00:07:40.433 Create I/O Submission Queue (01h): Supported 00:07:40.433 Get Log Page (02h): Supported 00:07:40.433 Delete I/O Completion Queue (04h): Supported 00:07:40.433 Create I/O Completion Queue (05h): Supported 00:07:40.433 Identify (06h): Supported 00:07:40.433 Abort (08h): Supported 00:07:40.433 Set Features (09h): Supported 00:07:40.433 Get Features (0Ah): Supported 00:07:40.433 Asynchronous Event Request (0Ch): Supported 00:07:40.433 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:40.433 Directive Send (19h): Supported 00:07:40.433 Directive Receive (1Ah): Supported 00:07:40.433 Virtualization Management (1Ch): Supported 00:07:40.433 Doorbell Buffer Config (7Ch): Supported 00:07:40.433 Format NVM (80h): Supported LBA-Change 00:07:40.433 I/O Commands 00:07:40.433 ------------ 00:07:40.433 Flush (00h): Supported LBA-Change 00:07:40.433 Write (01h): Supported LBA-Change 00:07:40.433 Read (02h): Supported 00:07:40.433 Compare (05h): Supported 00:07:40.433 Write Zeroes (08h): Supported LBA-Change 00:07:40.433 Dataset Management (09h): Supported LBA-Change 00:07:40.433 Unknown (0Ch): Supported 00:07:40.433 Unknown (12h): Supported 00:07:40.433 Copy (19h): Supported LBA-Change 00:07:40.433 Unknown (1Dh): Supported LBA-Change 00:07:40.433 00:07:40.433 Error Log 00:07:40.433 ========= 00:07:40.433 00:07:40.433 Arbitration 00:07:40.433 =========== 00:07:40.433 Arbitration Burst: no limit 00:07:40.433 00:07:40.433 Power Management 00:07:40.433 ================ 00:07:40.433 Number of Power States: 1 00:07:40.433 Current Power State: Power State #0 00:07:40.434 Power State #0: 00:07:40.434 Max Power: 25.00 W 00:07:40.434 Non-Operational State: Operational 00:07:40.434 Entry Latency: 16 microseconds 00:07:40.434 Exit Latency: 4 microseconds 00:07:40.434 Relative Read Throughput: 0 00:07:40.434 Relative Read Latency: 0 00:07:40.434 Relative Write Throughput: 0 00:07:40.434 Relative Write Latency: 0 00:07:40.434 Idle Power: Not Reported 00:07:40.434 Active Power: Not Reported 00:07:40.434 Non-Operational Permissive Mode: Not Supported 00:07:40.434 00:07:40.434 Health Information 00:07:40.434 ================== 00:07:40.434 Critical Warnings: 00:07:40.434 Available Spare Space: OK 00:07:40.434 Temperature: OK 00:07:40.434 Device Reliability: OK 00:07:40.434 Read Only: No 00:07:40.434 Volatile Memory Backup: OK 00:07:40.434 Current Temperature: 323 Kelvin (50 Celsius) 00:07:40.434 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:40.434 Available Spare: 0% 00:07:40.434 Available Spare Threshold: 0% 00:07:40.434 Life Percentage Used: 0% 00:07:40.434 Data Units Read: 623 00:07:40.434 Data Units Written: 551 00:07:40.434 Host Read Commands: 34048 00:07:40.434 Host Write Commands: 33834 00:07:40.434 Controller Busy Time: 0 minutes 00:07:40.434 Power Cycles: 0 00:07:40.434 Power On Hours: 0 hours 00:07:40.434 Unsafe Shutdowns: 0 00:07:40.434 Unrecoverable Media Errors: 0 00:07:40.434 Lifetime Error Log Entries: 0 00:07:40.434 Warning Temperature Time: 0 minutes 00:07:40.434 Critical Temperature Time: 0 minutes 00:07:40.434 00:07:40.434 Number of Queues 00:07:40.434 ================ 00:07:40.434 Number of I/O Submission Queues: 64 00:07:40.434 Number of I/O Completion Queues: 64 00:07:40.434 00:07:40.434 ZNS Specific Controller Data 00:07:40.434 ============================ 00:07:40.434 Zone Append Size Limit: 0 00:07:40.434 
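In the Active Namespaces sections below, each namespace pairs a raw LBA count with a rounded byte size; when the active LBA format carries no per-block metadata, that byte figure is exactly the LBA count times the format's data size. A quick bash check for the 1310720-LBA namespace of the 12341 controller further down (Current LBA Format #04: 4096-byte data, 0-byte metadata):
echo $((1310720 * 4096))              # 5368709120 bytes
echo $((1310720 * 4096 / 1024**3))    # 5 -> the "(5GiB)" label in the dump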
00:07:40.434 00:07:40.434 Active Namespaces 00:07:40.434 ================= 00:07:40.434 Namespace ID:1 00:07:40.434 Error Recovery Timeout: Unlimited 00:07:40.434 Command Set Identifier: NVM (00h) 00:07:40.434 Deallocate: Supported 00:07:40.434 Deallocated/Unwritten Error: Supported 00:07:40.434 Deallocated Read Value: All 0x00 00:07:40.434 Deallocate in Write Zeroes: Not Supported 00:07:40.434 Deallocated Guard Field: 0xFFFF 00:07:40.434 Flush: Supported 00:07:40.434 Reservation: Not Supported 00:07:40.434 Metadata Transferred as: Separate Metadata Buffer 00:07:40.434 Namespace Sharing Capabilities: Private 00:07:40.434 Size (in LBAs): 1548666 (5GiB) 00:07:40.434 Capacity (in LBAs): 1548666 (5GiB) 00:07:40.434 Utilization (in LBAs): 1548666 (5GiB) 00:07:40.434 Thin Provisioning: Not Supported 00:07:40.434 Per-NS Atomic Units: No 00:07:40.434 Maximum Single Source Range Length: 128 00:07:40.434 Maximum Copy Length: 128 00:07:40.434 Maximum Source Range Count: 128 00:07:40.434 NGUID/EUI64 Never Reused: No 00:07:40.434 Namespace Write Protected: No 00:07:40.434 Number of LBA Formats: 8 00:07:40.434 Current LBA Format: LBA Format #07 00:07:40.434 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:40.434 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:40.434 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:40.434 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:40.434 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:40.434 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:40.434 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:40.434 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:40.434 00:07:40.434 NVM Specific Namespace Data 00:07:40.434 =========================== 00:07:40.434 Logical Block Storage Tag Mask: 0 00:07:40.434 Protection Information Capabilities: 00:07:40.434 16b Guard Protection Information Storage Tag Support: No 00:07:40.434 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:40.434 Storage Tag Check Read Support: No 00:07:40.434 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.434 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.434 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.434 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.434 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.434 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.434 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.434 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.434 ===================================================== 00:07:40.434 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:40.434 ===================================================== 00:07:40.434 Controller Capabilities/Features 00:07:40.434 ================================ 00:07:40.434 Vendor ID: 1b36 00:07:40.434 Subsystem Vendor ID: 1af4 00:07:40.434 Serial Number: 12341 00:07:40.434 Model Number: QEMU NVMe Ctrl 00:07:40.434 Firmware Version: 8.0.0 00:07:40.434 Recommended Arb Burst: 6 00:07:40.434 IEEE OUI Identifier: 00 54 52 00:07:40.434 Multi-path I/O 00:07:40.434 May have multiple subsystem ports: No 00:07:40.434 May have multiple controllers: No 
00:07:40.434 Associated with SR-IOV VF: No 00:07:40.434 Max Data Transfer Size: 524288 00:07:40.434 Max Number of Namespaces: 256 00:07:40.434 Max Number of I/O Queues: 64 00:07:40.434 NVMe Specification Version (VS): 1.4 00:07:40.434 NVMe Specification Version (Identify): 1.4 00:07:40.434 Maximum Queue Entries: 2048 00:07:40.434 Contiguous Queues Required: Yes 00:07:40.434 Arbitration Mechanisms Supported 00:07:40.434 Weighted Round Robin: Not Supported 00:07:40.434 Vendor Specific: Not Supported 00:07:40.434 Reset Timeout: 7500 ms 00:07:40.434 Doorbell Stride: 4 bytes 00:07:40.434 NVM Subsystem Reset: Not Supported 00:07:40.434 Command Sets Supported 00:07:40.434 NVM Command Set: Supported 00:07:40.434 Boot Partition: Not Supported 00:07:40.434 Memory Page Size Minimum: 4096 bytes 00:07:40.434 Memory Page Size Maximum: 65536 bytes 00:07:40.434 Persistent Memory Region: Not Supported 00:07:40.434 Optional Asynchronous Events Supported 00:07:40.434 Namespace Attribute Notices: Supported 00:07:40.434 Firmware Activation Notices: Not Supported 00:07:40.434 ANA Change Notices: Not Supported 00:07:40.434 PLE Aggregate Log Change Notices: Not Supported 00:07:40.434 LBA Status Info Alert Notices: Not Supported 00:07:40.434 EGE Aggregate Log Change Notices: Not Supported 00:07:40.434 Normal NVM Subsystem Shutdown event: Not Supported 00:07:40.434 Zone Descriptor Change Notices: Not Supported 00:07:40.434 Discovery Log Change Notices: Not Supported 00:07:40.434 Controller Attributes 00:07:40.434 128-bit Host Identifier: Not Supported 00:07:40.434 Non-Operational Permissive Mode: Not Supported 00:07:40.434 NVM Sets: Not Supported 00:07:40.434 Read Recovery Levels: Not Supported 00:07:40.434 Endurance Groups: Not Supported 00:07:40.434 Predictable Latency Mode: Not Supported 00:07:40.434 Traffic Based Keep ALive: Not Supported 00:07:40.434 Namespace Granularity: Not Supported 00:07:40.434 SQ Associations: Not Supported 00:07:40.434 UUID List: Not Supported 00:07:40.434 Multi-Domain Subsystem: Not Supported 00:07:40.434 Fixed Capacity Management: Not Supported 00:07:40.434 Variable Capacity Management: Not Supported 00:07:40.434 Delete Endurance Group: Not Supported 00:07:40.434 Delete NVM Set: Not Supported 00:07:40.434 Extended LBA Formats Supported: Supported 00:07:40.434 Flexible Data Placement Supported: Not Supported 00:07:40.434 00:07:40.434 Controller Memory Buffer Support 00:07:40.434 ================================ 00:07:40.434 Supported: No 00:07:40.434 00:07:40.434 Persistent Memory Region Support 00:07:40.434 ================================ 00:07:40.434 Supported: No 00:07:40.434 00:07:40.434 Admin Command Set Attributes 00:07:40.434 ============================ 00:07:40.434 Security Send/Receive: Not Supported 00:07:40.434 Format NVM: Supported 00:07:40.434 Firmware Activate/Download: Not Supported 00:07:40.434 Namespace Management: Supported 00:07:40.434 Device Self-Test: Not Supported 00:07:40.434 Directives: Supported 00:07:40.434 NVMe-MI: Not Supported 00:07:40.434 Virtualization Management: Not Supported 00:07:40.434 Doorbell Buffer Config: Supported 00:07:40.434 Get LBA Status Capability: Not Supported 00:07:40.434 Command & Feature Lockdown Capability: Not Supported 00:07:40.434 Abort Command Limit: 4 00:07:40.434 Async Event Request Limit: 4 00:07:40.434 Number of Firmware Slots: N/A 00:07:40.434 Firmware Slot 1 Read-Only: N/A 00:07:40.434 Firmware Activation Without Reset: N/A 00:07:40.434 Multiple Update Detection Support: N/A 00:07:40.434 Firmware Update Granularity: No 
Information Provided 00:07:40.434 Per-Namespace SMART Log: Yes 00:07:40.434 Asymmetric Namespace Access Log Page: Not Supported 00:07:40.434 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:40.434 Command Effects Log Page: Supported 00:07:40.435 Get Log Page Extended Data: Supported 00:07:40.435 Telemetry Log Pages: Not Supported 00:07:40.435 Persistent Event Log Pages: Not Supported 00:07:40.435 Supported Log Pages Log Page: May Support 00:07:40.435 Commands Supported & Effects Log Page: Not Supported 00:07:40.435 Feature Identifiers & Effects Log Page:May Support 00:07:40.435 NVMe-MI Commands & Effects Log Page: May Support 00:07:40.435 Data Area 4 for Telemetry Log: Not Supported 00:07:40.435 Error Log Page Entries Supported: 1 00:07:40.435 Keep Alive: Not Supported 00:07:40.435 00:07:40.435 NVM Command Set Attributes 00:07:40.435 ========================== 00:07:40.435 Submission Queue Entry Size 00:07:40.435 Max: 64 00:07:40.435 Min: 64 00:07:40.435 Completion Queue Entry Size 00:07:40.435 Max: 16 00:07:40.435 Min: 16 00:07:40.435 Number of Namespaces: 256 00:07:40.435 Compare Command: Supported 00:07:40.435 Write Uncorrectable Command: Not Supported 00:07:40.435 Dataset Management Command: Supported 00:07:40.435 Write Zeroes Command: Supported 00:07:40.435 Set Features Save Field: Supported 00:07:40.435 Reservations: Not Supported 00:07:40.435 Timestamp: Supported 00:07:40.435 Copy: Supported 00:07:40.435 Volatile Write Cache: Present 00:07:40.435 Atomic Write Unit (Normal): 1 00:07:40.435 Atomic Write Unit (PFail): 1 00:07:40.435 Atomic Compare & Write Unit: 1 00:07:40.435 Fused Compare & Write: Not Supported 00:07:40.435 Scatter-Gather List 00:07:40.435 SGL Command Set: Supported 00:07:40.435 SGL Keyed: Not Supported 00:07:40.435 SGL Bit Bucket Descriptor: Not Supported 00:07:40.435 SGL Metadata Pointer: Not Supported 00:07:40.435 Oversized SGL: Not Supported 00:07:40.435 SGL Metadata Address: Not Supported 00:07:40.435 SGL Offset: Not Supported 00:07:40.435 Transport SGL Data Block: Not Supported 00:07:40.435 Replay Protected Memory Block: Not Supported 00:07:40.435 00:07:40.435 Firmware Slot Information 00:07:40.435 ========================= 00:07:40.435 Active slot: 1 00:07:40.435 Slot 1 Firmware Revision: 1.0 00:07:40.435 00:07:40.435 00:07:40.435 Commands Supported and Effects 00:07:40.435 ============================== 00:07:40.435 Admin Commands 00:07:40.435 -------------- 00:07:40.435 Delete I/O Submission Queue (00h): Supported 00:07:40.435 Create I/O Submission Queue (01h): Supported 00:07:40.435 Get Log Page (02h): Supported 00:07:40.435 Delete I/O Completion Queue (04h): Supported 00:07:40.435 Create I/O Completion Queue (05h): Supported 00:07:40.435 Identify (06h): Supported 00:07:40.435 Abort (08h): Supported 00:07:40.435 Set Features (09h): Supported 00:07:40.435 Get Features (0Ah): Supported 00:07:40.435 Asynchronous Event Request (0Ch): Supported 00:07:40.435 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:40.435 Directive Send (19h): Supported 00:07:40.435 Directive Receive (1Ah): Supported 00:07:40.435 Virtualization Management (1Ch): Supported 00:07:40.435 Doorbell Buffer Config (7Ch): Supported 00:07:40.435 Format NVM (80h): Supported LBA-Change 00:07:40.435 I/O Commands 00:07:40.435 ------------ 00:07:40.435 Flush (00h): Supported LBA-Change 00:07:40.435 Write (01h): Supported LBA-Change 00:07:40.435 Read (02h): Supported 00:07:40.435 Compare (05h): Supported 00:07:40.435 Write Zeroes (08h): Supported LBA-Change 00:07:40.435 Dataset Management 
(09h): Supported LBA-Change 00:07:40.435 Unknown (0Ch): Supported 00:07:40.435 Unknown (12h): Supported 00:07:40.435 Copy (19h): Supported LBA-Change 00:07:40.435 Unknown (1Dh): Supported LBA-Change 00:07:40.435 00:07:40.435 Error Log 00:07:40.435 ========= 00:07:40.435 00:07:40.435 Arbitration 00:07:40.435 =========== 00:07:40.435 Arbitration Burst: no limit 00:07:40.435 00:07:40.435 Power Management 00:07:40.435 ================ 00:07:40.435 Number of Power States: 1 00:07:40.435 Current Power State: Power State #0 00:07:40.435 Power State #0: 00:07:40.435 Max Power: 25.00 W 00:07:40.435 Non-Operational State: Operational 00:07:40.435 Entry Latency: 16 microseconds 00:07:40.435 Exit Latency: 4 microseconds 00:07:40.435 Relative Read Throughput: 0 00:07:40.435 Relative Read Latency: 0 00:07:40.435 Relative Write Throughput: 0 00:07:40.435 Relative Write Latency: 0 00:07:40.435 Idle Power: Not Reported 00:07:40.435 Active Power: Not Reported 00:07:40.435 Non-Operational Permissive Mode: Not Supported 00:07:40.435 00:07:40.435 Health Information 00:07:40.435 ================== 00:07:40.435 Critical Warnings: 00:07:40.435 Available Spare Space: OK 00:07:40.435 Temperature: OK 00:07:40.435 Device Reliability: OK 00:07:40.435 Read Only: No 00:07:40.435 Volatile Memory Backup: OK 00:07:40.435 Current Temperature: 323 Kelvin (50 Celsius) 00:07:40.435 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:40.435 Available Spare: 0% 00:07:40.435 Available Spare Threshold: 0% 00:07:40.435 Life Percentage Used: 0% 00:07:40.435 Data Units Read: 935 00:07:40.435 Data Units Written: 808 00:07:40.435 Host Read Commands: 48105 00:07:40.435 Host Write Commands: 47002 00:07:40.435 Controller Busy Time: 0 minutes 00:07:40.435 Power Cycles: 0 00:07:40.435 Power On Hours: 0 hours 00:07:40.435 Unsafe Shutdowns: 0 00:07:40.435 Unrecoverable Media Errors: 0 00:07:40.435 Lifetime Error Log Entries: 0 00:07:40.435 Warning Temperature Time: 0 minutes 00:07:40.435 Critical Temperature Time: 0 minutes 00:07:40.435 00:07:40.435 Number of Queues 00:07:40.435 ================ 00:07:40.435 Number of I/O Submission Queues: 64 00:07:40.435 Number of I/O Completion Queues: 64 00:07:40.435 00:07:40.435 ZNS Specific Controller Data 00:07:40.435 ============================ 00:07:40.435 Zone Append Size Limit: 0 00:07:40.435 00:07:40.435 00:07:40.435 Active Namespaces 00:07:40.435 ================= 00:07:40.435 Namespace ID:1 00:07:40.435 Error Recovery Timeout: Unlimited 00:07:40.435 Command Set Identifier: NVM (00h) 00:07:40.435 Deallocate: Supported 00:07:40.435 Deallocated/Unwritten Error: Supported 00:07:40.435 Deallocated Read Value: All 0x00 00:07:40.435 Deallocate in Write Zeroes: Not Supported 00:07:40.435 Deallocated Guard Field: 0xFFFF 00:07:40.435 Flush: Supported 00:07:40.435 Reservation: Not Supported 00:07:40.435 Namespace Sharing Capabilities: Private 00:07:40.435 Size (in LBAs): 1310720 (5GiB) 00:07:40.435 Capacity (in LBAs): 1310720 (5GiB) 00:07:40.435 Utilization (in LBAs): 1310720 (5GiB) 00:07:40.435 Thin Provisioning: Not Supported 00:07:40.435 Per-NS Atomic Units: No 00:07:40.435 Maximum Single Source Range Length: 128 00:07:40.435 Maximum Copy Length: 128 00:07:40.435 Maximum Source Range Count: 128 00:07:40.435 NGUID/EUI64 Never Reused: No 00:07:40.435 Namespace Write Protected: No 00:07:40.435 Number of LBA Formats: 8 00:07:40.435 Current LBA Format: LBA Format #04 00:07:40.435 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:40.435 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:40.435 LBA 
Format #02: Data Size: 512 Metadata Size: 16 00:07:40.435 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:40.435 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:40.435 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:40.435 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:40.435 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:40.435 00:07:40.435 NVM Specific Namespace Data 00:07:40.435 =========================== 00:07:40.435 Logical Block Storage Tag Mask: 0 00:07:40.435 Protection Information Capabilities: 00:07:40.435 16b Guard Protection Information Storage Tag Support: No 00:07:40.435 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:40.435 Storage Tag Check Read Support: No 00:07:40.435 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.435 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.435 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.435 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.435 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.435 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.435 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.435 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.435 ===================================================== 00:07:40.435 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:40.435 ===================================================== 00:07:40.435 Controller Capabilities/Features 00:07:40.435 ================================ 00:07:40.435 Vendor ID: 1b36 00:07:40.435 Subsystem Vendor ID: 1af4 00:07:40.435 Serial Number: 12343 00:07:40.435 Model Number: QEMU NVMe Ctrl 00:07:40.435 Firmware Version: 8.0.0 00:07:40.435 Recommended Arb Burst: 6 00:07:40.435 IEEE OUI Identifier: 00 54 52 00:07:40.436 Multi-path I/O 00:07:40.436 May have multiple subsystem ports: No 00:07:40.436 May have multiple controllers: Yes 00:07:40.436 Associated with SR-IOV VF: No 00:07:40.436 Max Data Transfer Size: 524288 00:07:40.436 Max Number of Namespaces: 256 00:07:40.436 Max Number of I/O Queues: 64 00:07:40.436 NVMe Specification Version (VS): 1.4 00:07:40.436 NVMe Specification Version (Identify): 1.4 00:07:40.436 Maximum Queue Entries: 2048 00:07:40.436 Contiguous Queues Required: Yes 00:07:40.436 Arbitration Mechanisms Supported 00:07:40.436 Weighted Round Robin: Not Supported 00:07:40.436 Vendor Specific: Not Supported 00:07:40.436 Reset Timeout: 7500 ms 00:07:40.436 Doorbell Stride: 4 bytes 00:07:40.436 NVM Subsystem Reset: Not Supported 00:07:40.436 Command Sets Supported 00:07:40.436 NVM Command Set: Supported 00:07:40.436 Boot Partition: Not Supported 00:07:40.436 Memory Page Size Minimum: 4096 bytes 00:07:40.436 Memory Page Size Maximum: 65536 bytes 00:07:40.436 Persistent Memory Region: Not Supported 00:07:40.436 Optional Asynchronous Events Supported 00:07:40.436 Namespace Attribute Notices: Supported 00:07:40.436 Firmware Activation Notices: Not Supported 00:07:40.436 ANA Change Notices: Not Supported 00:07:40.436 PLE Aggregate Log Change Notices: Not Supported 00:07:40.436 LBA Status Info Alert Notices: Not Supported 00:07:40.436 EGE Aggregate Log Change Notices: Not Supported 
00:07:40.436 Normal NVM Subsystem Shutdown event: Not Supported 00:07:40.436 Zone Descriptor Change Notices: Not Supported 00:07:40.436 Discovery Log Change Notices: Not Supported 00:07:40.436 Controller Attributes 00:07:40.436 128-bit Host Identifier: Not Supported 00:07:40.436 Non-Operational Permissive Mode: Not Supported 00:07:40.436 NVM Sets: Not Supported 00:07:40.436 Read Recovery Levels: Not Supported 00:07:40.436 Endurance Groups: Supported 00:07:40.436 Predictable Latency Mode: Not Supported 00:07:40.436 Traffic Based Keep ALive: Not Supported 00:07:40.436 Namespace Granularity: Not Supported 00:07:40.436 SQ Associations: Not Supported 00:07:40.436 UUID List: Not Supported 00:07:40.436 Multi-Domain Subsystem: Not Supported 00:07:40.436 Fixed Capacity Management: Not Supported 00:07:40.436 Variable Capacity Management: Not Supported 00:07:40.436 Delete Endurance Group: Not Supported 00:07:40.436 Delete NVM Set: Not Supported 00:07:40.436 Extended LBA Formats Supported: Supported 00:07:40.436 Flexible Data Placement Supported: Supported 00:07:40.436 00:07:40.436 Controller Memory Buffer Support 00:07:40.436 ================================ 00:07:40.436 Supported: No 00:07:40.436 00:07:40.436 Persistent Memory Region Support 00:07:40.436 ================================ 00:07:40.436 Supported: No 00:07:40.436 00:07:40.436 Admin Command Set Attributes 00:07:40.436 ============================ 00:07:40.436 Security Send/Receive: Not Supported 00:07:40.436 Format NVM: Supported 00:07:40.436 Firmware Activate/Download: Not Supported 00:07:40.436 Namespace Management: Supported 00:07:40.436 Device Self-Test: Not Supported 00:07:40.436 Directives: Supported 00:07:40.436 NVMe-MI: Not Supported 00:07:40.436 Virtualization Management: Not Supported 00:07:40.436 Doorbell Buffer Config: Supported 00:07:40.436 Get LBA Status Capability: Not Supported 00:07:40.436 Command & Feature Lockdown Capability: Not Supported 00:07:40.436 Abort Command Limit: 4 00:07:40.436 Async Event Request Limit: 4 00:07:40.436 Number of Firmware Slots: N/A 00:07:40.436 Firmware Slot 1 Read-Only: N/A 00:07:40.436 Firmware Activation Without Reset: N/A 00:07:40.436 Multiple Update Detection Support: N/A 00:07:40.436 Firmware Update Granularity: No Information Provided 00:07:40.436 Per-Namespace SMART Log: Yes 00:07:40.436 Asymmetric Namespace Access Log Page: Not Supported 00:07:40.436 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:40.436 Command Effects Log Page: Supported 00:07:40.436 Get Log Page Extended Data: Supported 00:07:40.436 Telemetry Log Pages: Not Supported 00:07:40.436 Persistent Event Log Pages: Not Supported 00:07:40.436 Supported Log Pages Log Page: May Support 00:07:40.436 Commands Supported & Effects Log Page: Not Supported 00:07:40.436 Feature Identifiers & Effects Log Page:May Support 00:07:40.436 NVMe-MI Commands & Effects Log Page: May Support 00:07:40.436 Data Area 4 for Telemetry Log: Not Supported 00:07:40.436 Error Log Page Entries Supported: 1 00:07:40.436 Keep Alive: Not Supported 00:07:40.436 00:07:40.436 NVM Command Set Attributes 00:07:40.436 ========================== 00:07:40.436 Submission Queue Entry Size 00:07:40.436 Max: 64 00:07:40.436 Min: 64 00:07:40.436 Completion Queue Entry Size 00:07:40.436 Max: 16 00:07:40.436 Min: 16 00:07:40.436 Number of Namespaces: 256 00:07:40.436 Compare Command: Supported 00:07:40.436 Write Uncorrectable Command: Not Supported 00:07:40.436 Dataset Management Command: Supported 00:07:40.436 Write Zeroes Command: Supported 00:07:40.436 Set 
Features Save Field: Supported 00:07:40.436 Reservations: Not Supported 00:07:40.436 Timestamp: Supported 00:07:40.436 Copy: Supported 00:07:40.436 Volatile Write Cache: Present 00:07:40.436 Atomic Write Unit (Normal): 1 00:07:40.436 Atomic Write Unit (PFail): 1 00:07:40.436 Atomic Compare & Write Unit: 1 00:07:40.436 Fused Compare & Write: Not Supported 00:07:40.436 Scatter-Gather List 00:07:40.436 SGL Command Set: Supported 00:07:40.436 SGL Keyed: Not Supported 00:07:40.436 SGL Bit Bucket Descriptor: Not Supported 00:07:40.436 SGL Metadata Pointer: Not Supported 00:07:40.436 Oversized SGL: Not Supported 00:07:40.436 SGL Metadata Address: Not Supported 00:07:40.436 SGL Offset: Not Supported 00:07:40.436 Transport SGL Data Block: Not Supported 00:07:40.436 Replay Protected Memory Block: Not Supported 00:07:40.436 00:07:40.436 Firmware Slot Information 00:07:40.436 ========================= 00:07:40.436 Active slot: 1 00:07:40.436 Slot 1 Firmware Revision: 1.0 00:07:40.436 00:07:40.436 00:07:40.436 Commands Supported and Effects 00:07:40.436 ============================== 00:07:40.436 Admin Commands 00:07:40.436 -------------- 00:07:40.436 Delete I/O Submission Queue (00h): Supported 00:07:40.436 Create I/O Submission Queue (01h): Supported 00:07:40.436 Get Log Page (02h): Supported 00:07:40.436 Delete I/O Completion Queue (04h): Supported 00:07:40.436 Create I/O Completion Queue (05h): Supported 00:07:40.436 Identify (06h): Supported 00:07:40.436 Abort (08h): Supported 00:07:40.436 Set Features (09h): Supported 00:07:40.436 Get Features (0Ah): Supported 00:07:40.436 Asynchronous Event Request (0Ch): Supported 00:07:40.436 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:40.436 Directive Send (19h): Supported 00:07:40.436 Directive Receive (1Ah): Supported 00:07:40.436 Virtualization Management (1Ch): Supported 00:07:40.436 Doorbell Buffer Config (7Ch): Supported 00:07:40.436 Format NVM (80h): Supported LBA-Change 00:07:40.436 I/O Commands 00:07:40.436 ------------ 00:07:40.436 Flush (00h): Supported LBA-Change 00:07:40.436 Write (01h): Supported LBA-Change 00:07:40.436 Read (02h): Supported 00:07:40.436 Compare (05h): Supported 00:07:40.436 Write Zeroes (08h): Supported LBA-Change 00:07:40.436 Dataset Management (09h): Supported LBA-Change 00:07:40.436 Unknown (0Ch): Supported 00:07:40.436 Unknown (12h): Supported 00:07:40.436 Copy (19h): Supported LBA-Change 00:07:40.436 Unknown (1Dh): Supported LBA-Change 00:07:40.436 00:07:40.436 Error Log 00:07:40.436 ========= 00:07:40.436 00:07:40.436 Arbitration 00:07:40.436 =========== 00:07:40.436 Arbitration Burst: no limit 00:07:40.436 00:07:40.436 Power Management 00:07:40.436 ================ 00:07:40.436 Number of Power States: 1 00:07:40.436 Current Power State: Power State #0 00:07:40.436 Power State #0: 00:07:40.436 Max Power: 25.00 W 00:07:40.436 Non-Operational State: Operational 00:07:40.436 Entry Latency: 16 microseconds 00:07:40.436 Exit Latency: 4 microseconds 00:07:40.436 Relative Read Throughput: 0 00:07:40.436 Relative Read Latency: 0 00:07:40.436 Relative Write Throughput: 0 00:07:40.436 Relative Write Latency: 0 00:07:40.436 Idle Power: Not Reported 00:07:40.436 Active Power: Not Reported 00:07:40.436 Non-Operational Permissive Mode: Not Supported 00:07:40.436 00:07:40.436 Health Information 00:07:40.436 ================== 00:07:40.436 Critical Warnings: 00:07:40.436 Available Spare Space: OK 00:07:40.436 Temperature: OK 00:07:40.436 Device Reliability: OK 00:07:40.436 Read Only: No 00:07:40.437 Volatile Memory 
Backup: OK 00:07:40.437 Current Temperature: 323 Kelvin (50 Celsius) 00:07:40.437 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:40.437 Available Spare: 0% 00:07:40.437 Available Spare Threshold: 0% 00:07:40.437 [2024-11-20 12:40:05.798436] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62989 terminated unexpected 00:07:40.437 [2024-11-20 12:40:05.799036] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62989 terminated unexpected 00:07:40.437 Life Percentage Used: 0% 00:07:40.437 Data Units Read: 851 00:07:40.437 Data Units Written: 780 00:07:40.437 Host Read Commands: 36182 00:07:40.437 Host Write Commands: 35605 00:07:40.437 Controller Busy Time: 0 minutes 00:07:40.437 Power Cycles: 0 00:07:40.437 Power On Hours: 0 hours 00:07:40.437 Unsafe Shutdowns: 0 00:07:40.437 Unrecoverable Media Errors: 0 00:07:40.437 Lifetime Error Log Entries: 0 00:07:40.437 Warning Temperature Time: 0 minutes 00:07:40.437 Critical Temperature Time: 0 minutes 00:07:40.437 00:07:40.437 Number of Queues 00:07:40.437 ================ 00:07:40.437 Number of I/O Submission Queues: 64 00:07:40.437 Number of I/O Completion Queues: 64 00:07:40.437 00:07:40.437 ZNS Specific Controller Data 00:07:40.437 ============================ 00:07:40.437 Zone Append Size Limit: 0 00:07:40.437 00:07:40.437 00:07:40.437 Active Namespaces 00:07:40.437 ================= 00:07:40.437 Namespace ID:1 00:07:40.437 Error Recovery Timeout: Unlimited 00:07:40.437 Command Set Identifier: NVM (00h) 00:07:40.437 Deallocate: Supported 00:07:40.437 Deallocated/Unwritten Error: Supported 00:07:40.437 Deallocated Read Value: All 0x00 00:07:40.437 Deallocate in Write Zeroes: Not Supported 00:07:40.437 Deallocated Guard Field: 0xFFFF 00:07:40.437 Flush: Supported 00:07:40.437 Reservation: Not Supported 00:07:40.437 Namespace Sharing Capabilities: Multiple Controllers 00:07:40.437 Size (in LBAs): 262144 (1GiB) 00:07:40.437 Capacity (in LBAs): 262144 (1GiB) 00:07:40.437 Utilization (in LBAs): 262144 (1GiB) 00:07:40.437 Thin Provisioning: Not Supported 00:07:40.437 Per-NS Atomic Units: No 00:07:40.437 Maximum Single Source Range Length: 128 00:07:40.437 Maximum Copy Length: 128 00:07:40.437 Maximum Source Range Count: 128 00:07:40.437 NGUID/EUI64 Never Reused: No 00:07:40.437 Namespace Write Protected: No 00:07:40.437 Endurance group ID: 1 00:07:40.437 Number of LBA Formats: 8 00:07:40.437 Current LBA Format: LBA Format #04 00:07:40.437 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:40.437 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:40.437 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:40.437 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:40.437 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:40.437 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:40.437 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:40.437 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:40.437 00:07:40.437 Get Feature FDP: 00:07:40.437 ================ 00:07:40.437 Enabled: Yes 00:07:40.437 FDP configuration index: 0 00:07:40.437 00:07:40.437 FDP configurations log page 00:07:40.437 =========================== 00:07:40.437 Number of FDP configurations: 1 00:07:40.437 Version: 0 00:07:40.437 Size: 112 00:07:40.437 FDP Configuration Descriptor: 0 00:07:40.437 Descriptor Size: 96 00:07:40.437 Reclaim Group Identifier format: 2 00:07:40.437 FDP Volatile Write Cache: Not Present 00:07:40.437 FDP Configuration: Valid 00:07:40.437 Vendor Specific 
Size: 0 00:07:40.437 Number of Reclaim Groups: 2 00:07:40.437 Number of Reclaim Unit Handles: 8 00:07:40.437 Max Placement Identifiers: 128 00:07:40.437 Number of Namespaces Supported: 256 00:07:40.437 Reclaim unit Nominal Size: 6000000 bytes 00:07:40.437 Estimated Reclaim Unit Time Limit: Not Reported 00:07:40.437 RUH Desc #000: RUH Type: Initially Isolated 00:07:40.437 RUH Desc #001: RUH Type: Initially Isolated 00:07:40.437 RUH Desc #002: RUH Type: Initially Isolated 00:07:40.437 RUH Desc #003: RUH Type: Initially Isolated 00:07:40.437 RUH Desc #004: RUH Type: Initially Isolated 00:07:40.437 RUH Desc #005: RUH Type: Initially Isolated 00:07:40.437 RUH Desc #006: RUH Type: Initially Isolated 00:07:40.437 RUH Desc #007: RUH Type: Initially Isolated 00:07:40.437 00:07:40.437 FDP reclaim unit handle usage log page 00:07:40.437 ====================================== 00:07:40.437 Number of Reclaim Unit Handles: 8 00:07:40.437 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:40.437 RUH Usage Desc #001: RUH Attributes: Unused 00:07:40.437 RUH Usage Desc #002: RUH Attributes: Unused 00:07:40.437 RUH Usage Desc #003: RUH Attributes: Unused 00:07:40.437 RUH Usage Desc #004: RUH Attributes: Unused 00:07:40.437 RUH Usage Desc #005: RUH Attributes: Unused 00:07:40.437 RUH Usage Desc #006: RUH Attributes: Unused 00:07:40.437 RUH Usage Desc #007: RUH Attributes: Unused 00:07:40.437 00:07:40.437 FDP statistics log page 00:07:40.437 ======================= 00:07:40.437 Host bytes with metadata written: 460234752 00:07:40.437 [2024-11-20 12:40:05.800716] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62989 terminated unexpected 00:07:40.437 Media bytes with metadata written: 460308480 00:07:40.437 Media bytes erased: 0 00:07:40.437 00:07:40.437 FDP events log page 00:07:40.437 =================== 00:07:40.437 Number of FDP events: 0 00:07:40.437 00:07:40.437 NVM Specific Namespace Data 00:07:40.437 =========================== 00:07:40.437 Logical Block Storage Tag Mask: 0 00:07:40.437 Protection Information Capabilities: 00:07:40.437 16b Guard Protection Information Storage Tag Support: No 00:07:40.437 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:40.437 Storage Tag Check Read Support: No 00:07:40.437 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.437 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.437 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.437 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.437 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.437 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.437 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.437 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.437 ===================================================== 00:07:40.437 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:40.437 ===================================================== 00:07:40.437 Controller Capabilities/Features 00:07:40.437 ================================ 00:07:40.437 Vendor ID: 1b36 00:07:40.437 Subsystem Vendor ID: 1af4 00:07:40.437 Serial Number: 12342 00:07:40.437 Model Number: QEMU 
NVMe Ctrl 00:07:40.437 Firmware Version: 8.0.0 00:07:40.437 Recommended Arb Burst: 6 00:07:40.437 IEEE OUI Identifier: 00 54 52 00:07:40.437 Multi-path I/O 00:07:40.437 May have multiple subsystem ports: No 00:07:40.437 May have multiple controllers: No 00:07:40.437 Associated with SR-IOV VF: No 00:07:40.437 Max Data Transfer Size: 524288 00:07:40.437 Max Number of Namespaces: 256 00:07:40.437 Max Number of I/O Queues: 64 00:07:40.437 NVMe Specification Version (VS): 1.4 00:07:40.437 NVMe Specification Version (Identify): 1.4 00:07:40.437 Maximum Queue Entries: 2048 00:07:40.437 Contiguous Queues Required: Yes 00:07:40.437 Arbitration Mechanisms Supported 00:07:40.437 Weighted Round Robin: Not Supported 00:07:40.437 Vendor Specific: Not Supported 00:07:40.437 Reset Timeout: 7500 ms 00:07:40.437 Doorbell Stride: 4 bytes 00:07:40.437 NVM Subsystem Reset: Not Supported 00:07:40.438 Command Sets Supported 00:07:40.438 NVM Command Set: Supported 00:07:40.438 Boot Partition: Not Supported 00:07:40.438 Memory Page Size Minimum: 4096 bytes 00:07:40.438 Memory Page Size Maximum: 65536 bytes 00:07:40.438 Persistent Memory Region: Not Supported 00:07:40.438 Optional Asynchronous Events Supported 00:07:40.438 Namespace Attribute Notices: Supported 00:07:40.438 Firmware Activation Notices: Not Supported 00:07:40.438 ANA Change Notices: Not Supported 00:07:40.438 PLE Aggregate Log Change Notices: Not Supported 00:07:40.438 LBA Status Info Alert Notices: Not Supported 00:07:40.438 EGE Aggregate Log Change Notices: Not Supported 00:07:40.438 Normal NVM Subsystem Shutdown event: Not Supported 00:07:40.438 Zone Descriptor Change Notices: Not Supported 00:07:40.438 Discovery Log Change Notices: Not Supported 00:07:40.438 Controller Attributes 00:07:40.438 128-bit Host Identifier: Not Supported 00:07:40.438 Non-Operational Permissive Mode: Not Supported 00:07:40.438 NVM Sets: Not Supported 00:07:40.438 Read Recovery Levels: Not Supported 00:07:40.438 Endurance Groups: Not Supported 00:07:40.438 Predictable Latency Mode: Not Supported 00:07:40.438 Traffic Based Keep ALive: Not Supported 00:07:40.438 Namespace Granularity: Not Supported 00:07:40.438 SQ Associations: Not Supported 00:07:40.438 UUID List: Not Supported 00:07:40.438 Multi-Domain Subsystem: Not Supported 00:07:40.438 Fixed Capacity Management: Not Supported 00:07:40.438 Variable Capacity Management: Not Supported 00:07:40.438 Delete Endurance Group: Not Supported 00:07:40.438 Delete NVM Set: Not Supported 00:07:40.438 Extended LBA Formats Supported: Supported 00:07:40.438 Flexible Data Placement Supported: Not Supported 00:07:40.438 00:07:40.438 Controller Memory Buffer Support 00:07:40.438 ================================ 00:07:40.438 Supported: No 00:07:40.438 00:07:40.438 Persistent Memory Region Support 00:07:40.438 ================================ 00:07:40.438 Supported: No 00:07:40.438 00:07:40.438 Admin Command Set Attributes 00:07:40.438 ============================ 00:07:40.438 Security Send/Receive: Not Supported 00:07:40.438 Format NVM: Supported 00:07:40.438 Firmware Activate/Download: Not Supported 00:07:40.438 Namespace Management: Supported 00:07:40.438 Device Self-Test: Not Supported 00:07:40.438 Directives: Supported 00:07:40.438 NVMe-MI: Not Supported 00:07:40.438 Virtualization Management: Not Supported 00:07:40.438 Doorbell Buffer Config: Supported 00:07:40.438 Get LBA Status Capability: Not Supported 00:07:40.438 Command & Feature Lockdown Capability: Not Supported 00:07:40.438 Abort Command Limit: 4 00:07:40.438 Async Event 
Request Limit: 4 00:07:40.438 Number of Firmware Slots: N/A 00:07:40.438 Firmware Slot 1 Read-Only: N/A 00:07:40.438 Firmware Activation Without Reset: N/A 00:07:40.438 Multiple Update Detection Support: N/A 00:07:40.438 Firmware Update Granularity: No Information Provided 00:07:40.438 Per-Namespace SMART Log: Yes 00:07:40.438 Asymmetric Namespace Access Log Page: Not Supported 00:07:40.438 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:40.438 Command Effects Log Page: Supported 00:07:40.438 Get Log Page Extended Data: Supported 00:07:40.438 Telemetry Log Pages: Not Supported 00:07:40.438 Persistent Event Log Pages: Not Supported 00:07:40.438 Supported Log Pages Log Page: May Support 00:07:40.438 Commands Supported & Effects Log Page: Not Supported 00:07:40.438 Feature Identifiers & Effects Log Page: May Support 00:07:40.438 NVMe-MI Commands & Effects Log Page: May Support 00:07:40.438 Data Area 4 for Telemetry Log: Not Supported 00:07:40.438 Error Log Page Entries Supported: 1 00:07:40.438 Keep Alive: Not Supported 00:07:40.438 00:07:40.438 NVM Command Set Attributes 00:07:40.438 ========================== 00:07:40.438 Submission Queue Entry Size 00:07:40.438 Max: 64 00:07:40.438 Min: 64 00:07:40.438 Completion Queue Entry Size 00:07:40.438 Max: 16 00:07:40.438 Min: 16 00:07:40.438 Number of Namespaces: 256 00:07:40.438 Compare Command: Supported 00:07:40.438 Write Uncorrectable Command: Not Supported 00:07:40.438 Dataset Management Command: Supported 00:07:40.438 Write Zeroes Command: Supported 00:07:40.438 Set Features Save Field: Supported 00:07:40.438 Reservations: Not Supported 00:07:40.438 Timestamp: Supported 00:07:40.438 Copy: Supported 00:07:40.438 Volatile Write Cache: Present 00:07:40.438 Atomic Write Unit (Normal): 1 00:07:40.438 Atomic Write Unit (PFail): 1 00:07:40.438 Atomic Compare & Write Unit: 1 00:07:40.438 Fused Compare & Write: Not Supported 00:07:40.438 Scatter-Gather List 00:07:40.438 SGL Command Set: Supported 00:07:40.438 SGL Keyed: Not Supported 00:07:40.438 SGL Bit Bucket Descriptor: Not Supported 00:07:40.438 SGL Metadata Pointer: Not Supported 00:07:40.438 Oversized SGL: Not Supported 00:07:40.438 SGL Metadata Address: Not Supported 00:07:40.438 SGL Offset: Not Supported 00:07:40.438 Transport SGL Data Block: Not Supported 00:07:40.438 Replay Protected Memory Block: Not Supported 00:07:40.438 00:07:40.438 Firmware Slot Information 00:07:40.438 ========================= 00:07:40.438 Active slot: 1 00:07:40.438 Slot 1 Firmware Revision: 1.0 00:07:40.438 00:07:40.438 00:07:40.438 Commands Supported and Effects 00:07:40.438 ============================== 00:07:40.438 Admin Commands 00:07:40.438 -------------- 00:07:40.438 Delete I/O Submission Queue (00h): Supported 00:07:40.438 Create I/O Submission Queue (01h): Supported 00:07:40.438 Get Log Page (02h): Supported 00:07:40.438 Delete I/O Completion Queue (04h): Supported 00:07:40.438 Create I/O Completion Queue (05h): Supported 00:07:40.438 Identify (06h): Supported 00:07:40.438 Abort (08h): Supported 00:07:40.438 Set Features (09h): Supported 00:07:40.438 Get Features (0Ah): Supported 00:07:40.438 Asynchronous Event Request (0Ch): Supported 00:07:40.438 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:40.438 Directive Send (19h): Supported 00:07:40.438 Directive Receive (1Ah): Supported 00:07:40.438 Virtualization Management (1Ch): Supported 00:07:40.438 Doorbell Buffer Config (7Ch): Supported 00:07:40.438 Format NVM (80h): Supported LBA-Change 00:07:40.438 I/O Commands 00:07:40.438 ------------
00:07:40.438 Flush (00h): Supported LBA-Change 00:07:40.438 Write (01h): Supported LBA-Change 00:07:40.438 Read (02h): Supported 00:07:40.438 Compare (05h): Supported 00:07:40.438 Write Zeroes (08h): Supported LBA-Change 00:07:40.438 Dataset Management (09h): Supported LBA-Change 00:07:40.438 Unknown (0Ch): Supported 00:07:40.438 Unknown (12h): Supported 00:07:40.438 Copy (19h): Supported LBA-Change 00:07:40.438 Unknown (1Dh): Supported LBA-Change 00:07:40.438 00:07:40.438 Error Log 00:07:40.438 ========= 00:07:40.438 00:07:40.438 Arbitration 00:07:40.438 =========== 00:07:40.438 Arbitration Burst: no limit 00:07:40.438 00:07:40.438 Power Management 00:07:40.438 ================ 00:07:40.438 Number of Power States: 1 00:07:40.438 Current Power State: Power State #0 00:07:40.438 Power State #0: 00:07:40.438 Max Power: 25.00 W 00:07:40.438 Non-Operational State: Operational 00:07:40.438 Entry Latency: 16 microseconds 00:07:40.438 Exit Latency: 4 microseconds 00:07:40.438 Relative Read Throughput: 0 00:07:40.438 Relative Read Latency: 0 00:07:40.438 Relative Write Throughput: 0 00:07:40.438 Relative Write Latency: 0 00:07:40.438 Idle Power: Not Reported 00:07:40.438 Active Power: Not Reported 00:07:40.438 Non-Operational Permissive Mode: Not Supported 00:07:40.438 00:07:40.438 Health Information 00:07:40.438 ================== 00:07:40.438 Critical Warnings: 00:07:40.438 Available Spare Space: OK 00:07:40.438 Temperature: OK 00:07:40.438 Device Reliability: OK 00:07:40.438 Read Only: No 00:07:40.438 Volatile Memory Backup: OK 00:07:40.438 Current Temperature: 323 Kelvin (50 Celsius) 00:07:40.438 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:40.438 Available Spare: 0% 00:07:40.438 Available Spare Threshold: 0% 00:07:40.438 Life Percentage Used: 0% 00:07:40.438 Data Units Read: 2048 00:07:40.438 Data Units Written: 1835 00:07:40.438 Host Read Commands: 104437 00:07:40.438 Host Write Commands: 102706 00:07:40.438 Controller Busy Time: 0 minutes 00:07:40.438 Power Cycles: 0 00:07:40.438 Power On Hours: 0 hours 00:07:40.438 Unsafe Shutdowns: 0 00:07:40.438 Unrecoverable Media Errors: 0 00:07:40.438 Lifetime Error Log Entries: 0 00:07:40.438 Warning Temperature Time: 0 minutes 00:07:40.438 Critical Temperature Time: 0 minutes 00:07:40.438 00:07:40.438 Number of Queues 00:07:40.438 ================ 00:07:40.438 Number of I/O Submission Queues: 64 00:07:40.438 Number of I/O Completion Queues: 64 00:07:40.438 00:07:40.439 ZNS Specific Controller Data 00:07:40.439 ============================ 00:07:40.439 Zone Append Size Limit: 0 00:07:40.439 00:07:40.439 00:07:40.439 Active Namespaces 00:07:40.439 ================= 00:07:40.439 Namespace ID:1 00:07:40.439 Error Recovery Timeout: Unlimited 00:07:40.439 Command Set Identifier: NVM (00h) 00:07:40.439 Deallocate: Supported 00:07:40.439 Deallocated/Unwritten Error: Supported 00:07:40.439 Deallocated Read Value: All 0x00 00:07:40.439 Deallocate in Write Zeroes: Not Supported 00:07:40.439 Deallocated Guard Field: 0xFFFF 00:07:40.439 Flush: Supported 00:07:40.439 Reservation: Not Supported 00:07:40.439 Namespace Sharing Capabilities: Private 00:07:40.439 Size (in LBAs): 1048576 (4GiB) 00:07:40.439 Capacity (in LBAs): 1048576 (4GiB) 00:07:40.439 Utilization (in LBAs): 1048576 (4GiB) 00:07:40.439 Thin Provisioning: Not Supported 00:07:40.439 Per-NS Atomic Units: No 00:07:40.439 Maximum Single Source Range Length: 128 00:07:40.439 Maximum Copy Length: 128 00:07:40.439 Maximum Source Range Count: 128 00:07:40.439 NGUID/EUI64 Never Reused: No 00:07:40.439 
Namespace Write Protected: No 00:07:40.439 Number of LBA Formats: 8 00:07:40.439 Current LBA Format: LBA Format #04 00:07:40.439 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:40.439 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:40.439 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:40.439 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:40.439 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:40.439 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:40.439 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:40.439 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:40.439 00:07:40.439 NVM Specific Namespace Data 00:07:40.439 =========================== 00:07:40.439 Logical Block Storage Tag Mask: 0 00:07:40.439 Protection Information Capabilities: 00:07:40.439 16b Guard Protection Information Storage Tag Support: No 00:07:40.439 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:40.439 Storage Tag Check Read Support: No 00:07:40.439 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.439 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.439 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.439 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.439 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.439 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.439 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.439 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.439 Namespace ID:2 00:07:40.439 Error Recovery Timeout: Unlimited 00:07:40.439 Command Set Identifier: NVM (00h) 00:07:40.439 Deallocate: Supported 00:07:40.439 Deallocated/Unwritten Error: Supported 00:07:40.439 Deallocated Read Value: All 0x00 00:07:40.439 Deallocate in Write Zeroes: Not Supported 00:07:40.439 Deallocated Guard Field: 0xFFFF 00:07:40.439 Flush: Supported 00:07:40.439 Reservation: Not Supported 00:07:40.439 Namespace Sharing Capabilities: Private 00:07:40.439 Size (in LBAs): 1048576 (4GiB) 00:07:40.439 Capacity (in LBAs): 1048576 (4GiB) 00:07:40.439 Utilization (in LBAs): 1048576 (4GiB) 00:07:40.439 Thin Provisioning: Not Supported 00:07:40.439 Per-NS Atomic Units: No 00:07:40.439 Maximum Single Source Range Length: 128 00:07:40.439 Maximum Copy Length: 128 00:07:40.439 Maximum Source Range Count: 128 00:07:40.439 NGUID/EUI64 Never Reused: No 00:07:40.439 Namespace Write Protected: No 00:07:40.439 Number of LBA Formats: 8 00:07:40.439 Current LBA Format: LBA Format #04 00:07:40.439 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:40.439 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:40.439 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:40.439 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:40.439 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:40.439 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:40.439 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:40.439 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:40.439 00:07:40.439 NVM Specific Namespace Data 00:07:40.439 =========================== 00:07:40.439 Logical Block Storage Tag Mask: 0 00:07:40.439 Protection Information Capabilities: 
00:07:40.439 16b Guard Protection Information Storage Tag Support: No 00:07:40.439 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:40.439 Storage Tag Check Read Support: No 00:07:40.439 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.439 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.439 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.439 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.439 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.439 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.439 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.439 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.439 Namespace ID:3 00:07:40.439 Error Recovery Timeout: Unlimited 00:07:40.439 Command Set Identifier: NVM (00h) 00:07:40.439 Deallocate: Supported 00:07:40.439 Deallocated/Unwritten Error: Supported 00:07:40.439 Deallocated Read Value: All 0x00 00:07:40.439 Deallocate in Write Zeroes: Not Supported 00:07:40.439 Deallocated Guard Field: 0xFFFF 00:07:40.439 Flush: Supported 00:07:40.439 Reservation: Not Supported 00:07:40.439 Namespace Sharing Capabilities: Private 00:07:40.439 Size (in LBAs): 1048576 (4GiB) 00:07:40.439 Capacity (in LBAs): 1048576 (4GiB) 00:07:40.439 Utilization (in LBAs): 1048576 (4GiB) 00:07:40.439 Thin Provisioning: Not Supported 00:07:40.439 Per-NS Atomic Units: No 00:07:40.439 Maximum Single Source Range Length: 128 00:07:40.439 Maximum Copy Length: 128 00:07:40.439 Maximum Source Range Count: 128 00:07:40.439 NGUID/EUI64 Never Reused: No 00:07:40.439 Namespace Write Protected: No 00:07:40.439 Number of LBA Formats: 8 00:07:40.439 Current LBA Format: LBA Format #04 00:07:40.439 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:40.439 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:40.439 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:40.439 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:40.439 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:40.439 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:40.439 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:40.439 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:40.439 00:07:40.439 NVM Specific Namespace Data 00:07:40.439 =========================== 00:07:40.439 Logical Block Storage Tag Mask: 0 00:07:40.439 Protection Information Capabilities: 00:07:40.439 16b Guard Protection Information Storage Tag Support: No 00:07:40.439 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:40.439 Storage Tag Check Read Support: No 00:07:40.439 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.439 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.439 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.439 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.439 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.439 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information 
Format: 16b Guard PI 00:07:40.439 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.439 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.439 12:40:05 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:40.439 12:40:05 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:40.701 ===================================================== 00:07:40.701 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:40.701 ===================================================== 00:07:40.701 Controller Capabilities/Features 00:07:40.701 ================================ 00:07:40.701 Vendor ID: 1b36 00:07:40.701 Subsystem Vendor ID: 1af4 00:07:40.701 Serial Number: 12340 00:07:40.701 Model Number: QEMU NVMe Ctrl 00:07:40.701 Firmware Version: 8.0.0 00:07:40.701 Recommended Arb Burst: 6 00:07:40.701 IEEE OUI Identifier: 00 54 52 00:07:40.701 Multi-path I/O 00:07:40.701 May have multiple subsystem ports: No 00:07:40.701 May have multiple controllers: No 00:07:40.701 Associated with SR-IOV VF: No 00:07:40.701 Max Data Transfer Size: 524288 00:07:40.701 Max Number of Namespaces: 256 00:07:40.701 Max Number of I/O Queues: 64 00:07:40.701 NVMe Specification Version (VS): 1.4 00:07:40.701 NVMe Specification Version (Identify): 1.4 00:07:40.701 Maximum Queue Entries: 2048 00:07:40.701 Contiguous Queues Required: Yes 00:07:40.701 Arbitration Mechanisms Supported 00:07:40.701 Weighted Round Robin: Not Supported 00:07:40.701 Vendor Specific: Not Supported 00:07:40.701 Reset Timeout: 7500 ms 00:07:40.701 Doorbell Stride: 4 bytes 00:07:40.701 NVM Subsystem Reset: Not Supported 00:07:40.701 Command Sets Supported 00:07:40.701 NVM Command Set: Supported 00:07:40.701 Boot Partition: Not Supported 00:07:40.701 Memory Page Size Minimum: 4096 bytes 00:07:40.701 Memory Page Size Maximum: 65536 bytes 00:07:40.701 Persistent Memory Region: Not Supported 00:07:40.701 Optional Asynchronous Events Supported 00:07:40.701 Namespace Attribute Notices: Supported 00:07:40.701 Firmware Activation Notices: Not Supported 00:07:40.701 ANA Change Notices: Not Supported 00:07:40.701 PLE Aggregate Log Change Notices: Not Supported 00:07:40.701 LBA Status Info Alert Notices: Not Supported 00:07:40.701 EGE Aggregate Log Change Notices: Not Supported 00:07:40.701 Normal NVM Subsystem Shutdown event: Not Supported 00:07:40.701 Zone Descriptor Change Notices: Not Supported 00:07:40.701 Discovery Log Change Notices: Not Supported 00:07:40.701 Controller Attributes 00:07:40.701 128-bit Host Identifier: Not Supported 00:07:40.701 Non-Operational Permissive Mode: Not Supported 00:07:40.701 NVM Sets: Not Supported 00:07:40.701 Read Recovery Levels: Not Supported 00:07:40.701 Endurance Groups: Not Supported 00:07:40.701 Predictable Latency Mode: Not Supported 00:07:40.701 Traffic Based Keep Alive: Not Supported 00:07:40.701 Namespace Granularity: Not Supported 00:07:40.701 SQ Associations: Not Supported 00:07:40.701 UUID List: Not Supported 00:07:40.701 Multi-Domain Subsystem: Not Supported 00:07:40.701 Fixed Capacity Management: Not Supported 00:07:40.701 Variable Capacity Management: Not Supported 00:07:40.701 Delete Endurance Group: Not Supported 00:07:40.701 Delete NVM Set: Not Supported 00:07:40.701 Extended LBA Formats Supported: Supported 00:07:40.701 Flexible Data Placement Supported: Not Supported 00:07:40.701 00:07:40.701 Controller Memory
Buffer Support 00:07:40.701 ================================ 00:07:40.701 Supported: No 00:07:40.701 00:07:40.701 Persistent Memory Region Support 00:07:40.701 ================================ 00:07:40.701 Supported: No 00:07:40.701 00:07:40.701 Admin Command Set Attributes 00:07:40.701 ============================ 00:07:40.701 Security Send/Receive: Not Supported 00:07:40.701 Format NVM: Supported 00:07:40.701 Firmware Activate/Download: Not Supported 00:07:40.701 Namespace Management: Supported 00:07:40.701 Device Self-Test: Not Supported 00:07:40.701 Directives: Supported 00:07:40.701 NVMe-MI: Not Supported 00:07:40.701 Virtualization Management: Not Supported 00:07:40.701 Doorbell Buffer Config: Supported 00:07:40.701 Get LBA Status Capability: Not Supported 00:07:40.701 Command & Feature Lockdown Capability: Not Supported 00:07:40.701 Abort Command Limit: 4 00:07:40.701 Async Event Request Limit: 4 00:07:40.701 Number of Firmware Slots: N/A 00:07:40.701 Firmware Slot 1 Read-Only: N/A 00:07:40.701 Firmware Activation Without Reset: N/A 00:07:40.701 Multiple Update Detection Support: N/A 00:07:40.701 Firmware Update Granularity: No Information Provided 00:07:40.701 Per-Namespace SMART Log: Yes 00:07:40.701 Asymmetric Namespace Access Log Page: Not Supported 00:07:40.701 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:40.701 Command Effects Log Page: Supported 00:07:40.701 Get Log Page Extended Data: Supported 00:07:40.701 Telemetry Log Pages: Not Supported 00:07:40.701 Persistent Event Log Pages: Not Supported 00:07:40.701 Supported Log Pages Log Page: May Support 00:07:40.701 Commands Supported & Effects Log Page: Not Supported 00:07:40.701 Feature Identifiers & Effects Log Page: May Support 00:07:40.701 NVMe-MI Commands & Effects Log Page: May Support 00:07:40.701 Data Area 4 for Telemetry Log: Not Supported 00:07:40.701 Error Log Page Entries Supported: 1 00:07:40.701 Keep Alive: Not Supported 00:07:40.701 00:07:40.701 NVM Command Set Attributes 00:07:40.701 ========================== 00:07:40.702 Submission Queue Entry Size 00:07:40.702 Max: 64 00:07:40.702 Min: 64 00:07:40.702 Completion Queue Entry Size 00:07:40.702 Max: 16 00:07:40.702 Min: 16 00:07:40.702 Number of Namespaces: 256 00:07:40.702 Compare Command: Supported 00:07:40.702 Write Uncorrectable Command: Not Supported 00:07:40.702 Dataset Management Command: Supported 00:07:40.702 Write Zeroes Command: Supported 00:07:40.702 Set Features Save Field: Supported 00:07:40.702 Reservations: Not Supported 00:07:40.702 Timestamp: Supported 00:07:40.702 Copy: Supported 00:07:40.702 Volatile Write Cache: Present 00:07:40.702 Atomic Write Unit (Normal): 1 00:07:40.702 Atomic Write Unit (PFail): 1 00:07:40.702 Atomic Compare & Write Unit: 1 00:07:40.702 Fused Compare & Write: Not Supported 00:07:40.702 Scatter-Gather List 00:07:40.702 SGL Command Set: Supported 00:07:40.702 SGL Keyed: Not Supported 00:07:40.702 SGL Bit Bucket Descriptor: Not Supported 00:07:40.702 SGL Metadata Pointer: Not Supported 00:07:40.702 Oversized SGL: Not Supported 00:07:40.702 SGL Metadata Address: Not Supported 00:07:40.702 SGL Offset: Not Supported 00:07:40.702 Transport SGL Data Block: Not Supported 00:07:40.702 Replay Protected Memory Block: Not Supported 00:07:40.702 00:07:40.702 Firmware Slot Information 00:07:40.702 ========================= 00:07:40.702 Active slot: 1 00:07:40.702 Slot 1 Firmware Revision: 1.0 00:07:40.702 00:07:40.702 00:07:40.702 Commands Supported and Effects 00:07:40.702 ============================== 00:07:40.702 Admin Commands
00:07:40.702 -------------- 00:07:40.702 Delete I/O Submission Queue (00h): Supported 00:07:40.702 Create I/O Submission Queue (01h): Supported 00:07:40.702 Get Log Page (02h): Supported 00:07:40.702 Delete I/O Completion Queue (04h): Supported 00:07:40.702 Create I/O Completion Queue (05h): Supported 00:07:40.702 Identify (06h): Supported 00:07:40.702 Abort (08h): Supported 00:07:40.702 Set Features (09h): Supported 00:07:40.702 Get Features (0Ah): Supported 00:07:40.702 Asynchronous Event Request (0Ch): Supported 00:07:40.702 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:40.702 Directive Send (19h): Supported 00:07:40.702 Directive Receive (1Ah): Supported 00:07:40.702 Virtualization Management (1Ch): Supported 00:07:40.702 Doorbell Buffer Config (7Ch): Supported 00:07:40.702 Format NVM (80h): Supported LBA-Change 00:07:40.702 I/O Commands 00:07:40.702 ------------ 00:07:40.702 Flush (00h): Supported LBA-Change 00:07:40.702 Write (01h): Supported LBA-Change 00:07:40.702 Read (02h): Supported 00:07:40.702 Compare (05h): Supported 00:07:40.702 Write Zeroes (08h): Supported LBA-Change 00:07:40.702 Dataset Management (09h): Supported LBA-Change 00:07:40.702 Unknown (0Ch): Supported 00:07:40.702 Unknown (12h): Supported 00:07:40.702 Copy (19h): Supported LBA-Change 00:07:40.702 Unknown (1Dh): Supported LBA-Change 00:07:40.702 00:07:40.702 Error Log 00:07:40.702 ========= 00:07:40.702 00:07:40.702 Arbitration 00:07:40.702 =========== 00:07:40.702 Arbitration Burst: no limit 00:07:40.702 00:07:40.702 Power Management 00:07:40.702 ================ 00:07:40.702 Number of Power States: 1 00:07:40.702 Current Power State: Power State #0 00:07:40.702 Power State #0: 00:07:40.702 Max Power: 25.00 W 00:07:40.702 Non-Operational State: Operational 00:07:40.702 Entry Latency: 16 microseconds 00:07:40.702 Exit Latency: 4 microseconds 00:07:40.702 Relative Read Throughput: 0 00:07:40.702 Relative Read Latency: 0 00:07:40.702 Relative Write Throughput: 0 00:07:40.702 Relative Write Latency: 0 00:07:40.702 Idle Power: Not Reported 00:07:40.702 Active Power: Not Reported 00:07:40.702 Non-Operational Permissive Mode: Not Supported 00:07:40.702 00:07:40.702 Health Information 00:07:40.702 ================== 00:07:40.702 Critical Warnings: 00:07:40.702 Available Spare Space: OK 00:07:40.702 Temperature: OK 00:07:40.702 Device Reliability: OK 00:07:40.702 Read Only: No 00:07:40.702 Volatile Memory Backup: OK 00:07:40.702 Current Temperature: 323 Kelvin (50 Celsius) 00:07:40.702 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:40.702 Available Spare: 0% 00:07:40.702 Available Spare Threshold: 0% 00:07:40.702 Life Percentage Used: 0% 00:07:40.702 Data Units Read: 623 00:07:40.702 Data Units Written: 551 00:07:40.702 Host Read Commands: 34048 00:07:40.702 Host Write Commands: 33834 00:07:40.702 Controller Busy Time: 0 minutes 00:07:40.702 Power Cycles: 0 00:07:40.702 Power On Hours: 0 hours 00:07:40.702 Unsafe Shutdowns: 0 00:07:40.702 Unrecoverable Media Errors: 0 00:07:40.702 Lifetime Error Log Entries: 0 00:07:40.702 Warning Temperature Time: 0 minutes 00:07:40.702 Critical Temperature Time: 0 minutes 00:07:40.702 00:07:40.702 Number of Queues 00:07:40.702 ================ 00:07:40.702 Number of I/O Submission Queues: 64 00:07:40.702 Number of I/O Completion Queues: 64 00:07:40.702 00:07:40.702 ZNS Specific Controller Data 00:07:40.702 ============================ 00:07:40.702 Zone Append Size Limit: 0 00:07:40.702 00:07:40.702 00:07:40.702 Active Namespaces 00:07:40.702 ================= 
00:07:40.702 Namespace ID:1 00:07:40.702 Error Recovery Timeout: Unlimited 00:07:40.702 Command Set Identifier: NVM (00h) 00:07:40.702 Deallocate: Supported 00:07:40.702 Deallocated/Unwritten Error: Supported 00:07:40.702 Deallocated Read Value: All 0x00 00:07:40.702 Deallocate in Write Zeroes: Not Supported 00:07:40.702 Deallocated Guard Field: 0xFFFF 00:07:40.702 Flush: Supported 00:07:40.702 Reservation: Not Supported 00:07:40.702 Metadata Transferred as: Separate Metadata Buffer 00:07:40.702 Namespace Sharing Capabilities: Private 00:07:40.702 Size (in LBAs): 1548666 (5GiB) 00:07:40.702 Capacity (in LBAs): 1548666 (5GiB) 00:07:40.702 Utilization (in LBAs): 1548666 (5GiB) 00:07:40.702 Thin Provisioning: Not Supported 00:07:40.702 Per-NS Atomic Units: No 00:07:40.702 Maximum Single Source Range Length: 128 00:07:40.702 Maximum Copy Length: 128 00:07:40.702 Maximum Source Range Count: 128 00:07:40.702 NGUID/EUI64 Never Reused: No 00:07:40.702 Namespace Write Protected: No 00:07:40.702 Number of LBA Formats: 8 00:07:40.702 Current LBA Format: LBA Format #07 00:07:40.702 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:40.702 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:40.702 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:40.702 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:40.702 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:40.702 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:40.702 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:40.703 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:40.703 00:07:40.703 NVM Specific Namespace Data 00:07:40.703 =========================== 00:07:40.703 Logical Block Storage Tag Mask: 0 00:07:40.703 Protection Information Capabilities: 00:07:40.703 16b Guard Protection Information Storage Tag Support: No 00:07:40.703 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:40.703 Storage Tag Check Read Support: No 00:07:40.703 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.703 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.703 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.703 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.703 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.703 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.703 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.703 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.703 12:40:06 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:40.703 12:40:06 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:40.964 ===================================================== 00:07:40.964 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:40.964 ===================================================== 00:07:40.964 Controller Capabilities/Features 00:07:40.964 ================================ 00:07:40.964 Vendor ID: 1b36 00:07:40.964 Subsystem Vendor ID: 1af4 00:07:40.964 Serial Number: 12341 00:07:40.964 Model Number: QEMU NVMe Ctrl 00:07:40.964 Firmware Version: 8.0.0 00:07:40.964 Recommended Arb 
Burst: 6 00:07:40.964 IEEE OUI Identifier: 00 54 52 00:07:40.964 Multi-path I/O 00:07:40.964 May have multiple subsystem ports: No 00:07:40.964 May have multiple controllers: No 00:07:40.964 Associated with SR-IOV VF: No 00:07:40.964 Max Data Transfer Size: 524288 00:07:40.964 Max Number of Namespaces: 256 00:07:40.964 Max Number of I/O Queues: 64 00:07:40.964 NVMe Specification Version (VS): 1.4 00:07:40.964 NVMe Specification Version (Identify): 1.4 00:07:40.964 Maximum Queue Entries: 2048 00:07:40.964 Contiguous Queues Required: Yes 00:07:40.964 Arbitration Mechanisms Supported 00:07:40.964 Weighted Round Robin: Not Supported 00:07:40.964 Vendor Specific: Not Supported 00:07:40.964 Reset Timeout: 7500 ms 00:07:40.964 Doorbell Stride: 4 bytes 00:07:40.964 NVM Subsystem Reset: Not Supported 00:07:40.964 Command Sets Supported 00:07:40.964 NVM Command Set: Supported 00:07:40.964 Boot Partition: Not Supported 00:07:40.964 Memory Page Size Minimum: 4096 bytes 00:07:40.964 Memory Page Size Maximum: 65536 bytes 00:07:40.964 Persistent Memory Region: Not Supported 00:07:40.964 Optional Asynchronous Events Supported 00:07:40.964 Namespace Attribute Notices: Supported 00:07:40.964 Firmware Activation Notices: Not Supported 00:07:40.964 ANA Change Notices: Not Supported 00:07:40.964 PLE Aggregate Log Change Notices: Not Supported 00:07:40.964 LBA Status Info Alert Notices: Not Supported 00:07:40.964 EGE Aggregate Log Change Notices: Not Supported 00:07:40.964 Normal NVM Subsystem Shutdown event: Not Supported 00:07:40.964 Zone Descriptor Change Notices: Not Supported 00:07:40.964 Discovery Log Change Notices: Not Supported 00:07:40.964 Controller Attributes 00:07:40.964 128-bit Host Identifier: Not Supported 00:07:40.964 Non-Operational Permissive Mode: Not Supported 00:07:40.964 NVM Sets: Not Supported 00:07:40.964 Read Recovery Levels: Not Supported 00:07:40.964 Endurance Groups: Not Supported 00:07:40.964 Predictable Latency Mode: Not Supported 00:07:40.964 Traffic Based Keep Alive: Not Supported 00:07:40.964 Namespace Granularity: Not Supported 00:07:40.964 SQ Associations: Not Supported 00:07:40.964 UUID List: Not Supported 00:07:40.964 Multi-Domain Subsystem: Not Supported 00:07:40.964 Fixed Capacity Management: Not Supported 00:07:40.964 Variable Capacity Management: Not Supported 00:07:40.964 Delete Endurance Group: Not Supported 00:07:40.964 Delete NVM Set: Not Supported 00:07:40.964 Extended LBA Formats Supported: Supported 00:07:40.964 Flexible Data Placement Supported: Not Supported 00:07:40.964 00:07:40.964 Controller Memory Buffer Support 00:07:40.964 ================================ 00:07:40.964 Supported: No 00:07:40.964 00:07:40.964 Persistent Memory Region Support 00:07:40.964 ================================ 00:07:40.964 Supported: No 00:07:40.964 00:07:40.964 Admin Command Set Attributes 00:07:40.964 ============================ 00:07:40.964 Security Send/Receive: Not Supported 00:07:40.964 Format NVM: Supported 00:07:40.964 Firmware Activate/Download: Not Supported 00:07:40.964 Namespace Management: Supported 00:07:40.964 Device Self-Test: Not Supported 00:07:40.964 Directives: Supported 00:07:40.964 NVMe-MI: Not Supported 00:07:40.964 Virtualization Management: Not Supported 00:07:40.964 Doorbell Buffer Config: Supported 00:07:40.964 Get LBA Status Capability: Not Supported 00:07:40.964 Command & Feature Lockdown Capability: Not Supported 00:07:40.964 Abort Command Limit: 4 00:07:40.964 Async Event Request Limit: 4 00:07:40.964 Number of Firmware Slots: N/A 00:07:40.964
Firmware Slot 1 Read-Only: N/A 00:07:40.964 Firmware Activation Without Reset: N/A 00:07:40.964 Multiple Update Detection Support: N/A 00:07:40.964 Firmware Update Granularity: No Information Provided 00:07:40.964 Per-Namespace SMART Log: Yes 00:07:40.964 Asymmetric Namespace Access Log Page: Not Supported 00:07:40.964 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:40.964 Command Effects Log Page: Supported 00:07:40.964 Get Log Page Extended Data: Supported 00:07:40.964 Telemetry Log Pages: Not Supported 00:07:40.964 Persistent Event Log Pages: Not Supported 00:07:40.964 Supported Log Pages Log Page: May Support 00:07:40.965 Commands Supported & Effects Log Page: Not Supported 00:07:40.965 Feature Identifiers & Effects Log Page: May Support 00:07:40.965 NVMe-MI Commands & Effects Log Page: May Support 00:07:40.965 Data Area 4 for Telemetry Log: Not Supported 00:07:40.965 Error Log Page Entries Supported: 1 00:07:40.965 Keep Alive: Not Supported 00:07:40.965 00:07:40.965 NVM Command Set Attributes 00:07:40.965 ========================== 00:07:40.965 Submission Queue Entry Size 00:07:40.965 Max: 64 00:07:40.965 Min: 64 00:07:40.965 Completion Queue Entry Size 00:07:40.965 Max: 16 00:07:40.965 Min: 16 00:07:40.965 Number of Namespaces: 256 00:07:40.965 Compare Command: Supported 00:07:40.965 Write Uncorrectable Command: Not Supported 00:07:40.965 Dataset Management Command: Supported 00:07:40.965 Write Zeroes Command: Supported 00:07:40.965 Set Features Save Field: Supported 00:07:40.965 Reservations: Not Supported 00:07:40.965 Timestamp: Supported 00:07:40.965 Copy: Supported 00:07:40.965 Volatile Write Cache: Present 00:07:40.965 Atomic Write Unit (Normal): 1 00:07:40.965 Atomic Write Unit (PFail): 1 00:07:40.965 Atomic Compare & Write Unit: 1 00:07:40.965 Fused Compare & Write: Not Supported 00:07:40.965 Scatter-Gather List 00:07:40.965 SGL Command Set: Supported 00:07:40.965 SGL Keyed: Not Supported 00:07:40.965 SGL Bit Bucket Descriptor: Not Supported 00:07:40.965 SGL Metadata Pointer: Not Supported 00:07:40.965 Oversized SGL: Not Supported 00:07:40.965 SGL Metadata Address: Not Supported 00:07:40.965 SGL Offset: Not Supported 00:07:40.965 Transport SGL Data Block: Not Supported 00:07:40.965 Replay Protected Memory Block: Not Supported 00:07:40.965 00:07:40.965 Firmware Slot Information 00:07:40.965 ========================= 00:07:40.965 Active slot: 1 00:07:40.965 Slot 1 Firmware Revision: 1.0 00:07:40.965 00:07:40.965 00:07:40.965 Commands Supported and Effects 00:07:40.965 ============================== 00:07:40.965 Admin Commands 00:07:40.965 -------------- 00:07:40.965 Delete I/O Submission Queue (00h): Supported 00:07:40.965 Create I/O Submission Queue (01h): Supported 00:07:40.965 Get Log Page (02h): Supported 00:07:40.965 Delete I/O Completion Queue (04h): Supported 00:07:40.965 Create I/O Completion Queue (05h): Supported 00:07:40.965 Identify (06h): Supported 00:07:40.965 Abort (08h): Supported 00:07:40.965 Set Features (09h): Supported 00:07:40.965 Get Features (0Ah): Supported 00:07:40.965 Asynchronous Event Request (0Ch): Supported 00:07:40.965 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:40.965 Directive Send (19h): Supported 00:07:40.965 Directive Receive (1Ah): Supported 00:07:40.965 Virtualization Management (1Ch): Supported 00:07:40.965 Doorbell Buffer Config (7Ch): Supported 00:07:40.965 Format NVM (80h): Supported LBA-Change 00:07:40.965 I/O Commands 00:07:40.965 ------------ 00:07:40.965 Flush (00h): Supported LBA-Change 00:07:40.965 Write (01h):
Supported LBA-Change 00:07:40.965 Read (02h): Supported 00:07:40.965 Compare (05h): Supported 00:07:40.965 Write Zeroes (08h): Supported LBA-Change 00:07:40.965 Dataset Management (09h): Supported LBA-Change 00:07:40.965 Unknown (0Ch): Supported 00:07:40.965 Unknown (12h): Supported 00:07:40.965 Copy (19h): Supported LBA-Change 00:07:40.965 Unknown (1Dh): Supported LBA-Change 00:07:40.965 00:07:40.965 Error Log 00:07:40.965 ========= 00:07:40.965 00:07:40.965 Arbitration 00:07:40.965 =========== 00:07:40.965 Arbitration Burst: no limit 00:07:40.965 00:07:40.965 Power Management 00:07:40.965 ================ 00:07:40.965 Number of Power States: 1 00:07:40.965 Current Power State: Power State #0 00:07:40.965 Power State #0: 00:07:40.965 Max Power: 25.00 W 00:07:40.965 Non-Operational State: Operational 00:07:40.965 Entry Latency: 16 microseconds 00:07:40.965 Exit Latency: 4 microseconds 00:07:40.965 Relative Read Throughput: 0 00:07:40.965 Relative Read Latency: 0 00:07:40.965 Relative Write Throughput: 0 00:07:40.965 Relative Write Latency: 0 00:07:40.965 Idle Power: Not Reported 00:07:40.965 Active Power: Not Reported 00:07:40.965 Non-Operational Permissive Mode: Not Supported 00:07:40.965 00:07:40.965 Health Information 00:07:40.965 ================== 00:07:40.965 Critical Warnings: 00:07:40.965 Available Spare Space: OK 00:07:40.965 Temperature: OK 00:07:40.965 Device Reliability: OK 00:07:40.965 Read Only: No 00:07:40.965 Volatile Memory Backup: OK 00:07:40.965 Current Temperature: 323 Kelvin (50 Celsius) 00:07:40.965 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:40.965 Available Spare: 0% 00:07:40.965 Available Spare Threshold: 0% 00:07:40.965 Life Percentage Used: 0% 00:07:40.965 Data Units Read: 935 00:07:40.965 Data Units Written: 808 00:07:40.965 Host Read Commands: 48105 00:07:40.965 Host Write Commands: 47002 00:07:40.965 Controller Busy Time: 0 minutes 00:07:40.965 Power Cycles: 0 00:07:40.965 Power On Hours: 0 hours 00:07:40.965 Unsafe Shutdowns: 0 00:07:40.965 Unrecoverable Media Errors: 0 00:07:40.965 Lifetime Error Log Entries: 0 00:07:40.965 Warning Temperature Time: 0 minutes 00:07:40.965 Critical Temperature Time: 0 minutes 00:07:40.965 00:07:40.965 Number of Queues 00:07:40.965 ================ 00:07:40.965 Number of I/O Submission Queues: 64 00:07:40.965 Number of I/O Completion Queues: 64 00:07:40.965 00:07:40.965 ZNS Specific Controller Data 00:07:40.965 ============================ 00:07:40.965 Zone Append Size Limit: 0 00:07:40.965 00:07:40.965 00:07:40.965 Active Namespaces 00:07:40.965 ================= 00:07:40.965 Namespace ID:1 00:07:40.965 Error Recovery Timeout: Unlimited 00:07:40.965 Command Set Identifier: NVM (00h) 00:07:40.965 Deallocate: Supported 00:07:40.965 Deallocated/Unwritten Error: Supported 00:07:40.965 Deallocated Read Value: All 0x00 00:07:40.965 Deallocate in Write Zeroes: Not Supported 00:07:40.965 Deallocated Guard Field: 0xFFFF 00:07:40.965 Flush: Supported 00:07:40.965 Reservation: Not Supported 00:07:40.965 Namespace Sharing Capabilities: Private 00:07:40.965 Size (in LBAs): 1310720 (5GiB) 00:07:40.965 Capacity (in LBAs): 1310720 (5GiB) 00:07:40.965 Utilization (in LBAs): 1310720 (5GiB) 00:07:40.965 Thin Provisioning: Not Supported 00:07:40.965 Per-NS Atomic Units: No 00:07:40.965 Maximum Single Source Range Length: 128 00:07:40.965 Maximum Copy Length: 128 00:07:40.965 Maximum Source Range Count: 128 00:07:40.965 NGUID/EUI64 Never Reused: No 00:07:40.965 Namespace Write Protected: No 00:07:40.965 Number of LBA Formats: 8 
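A quick cross-check of the namespace sizing reported above: the 12341 controller's namespace reports 1310720 LBAs, and its current LBA format #04 (listed just below) carries a 4096-byte data size, so the capacity is 1310720 * 4096 bytes = 5 GiB exactly, matching the reported "(5GiB)". The same arithmetic as a one-liner in the harness's own bash, using only those two figures from the dump:

    echo "$(( 1310720 * 4096 / 1024 ** 3 ))GiB"    # prints "5GiB"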
00:07:40.965 Current LBA Format: LBA Format #04 00:07:40.965 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:40.965 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:40.965 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:40.965 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:40.965 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:40.965 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:40.965 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:40.965 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:40.965 00:07:40.965 NVM Specific Namespace Data 00:07:40.965 =========================== 00:07:40.965 Logical Block Storage Tag Mask: 0 00:07:40.965 Protection Information Capabilities: 00:07:40.965 16b Guard Protection Information Storage Tag Support: No 00:07:40.965 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:40.965 Storage Tag Check Read Support: No 00:07:40.965 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.965 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.965 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.965 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.965 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.965 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.965 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.965 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.966 12:40:06 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:40.966 12:40:06 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:40.966 ===================================================== 00:07:40.966 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:40.966 ===================================================== 00:07:40.966 Controller Capabilities/Features 00:07:40.966 ================================ 00:07:40.966 Vendor ID: 1b36 00:07:40.966 Subsystem Vendor ID: 1af4 00:07:40.966 Serial Number: 12342 00:07:40.966 Model Number: QEMU NVMe Ctrl 00:07:40.966 Firmware Version: 8.0.0 00:07:40.966 Recommended Arb Burst: 6 00:07:40.966 IEEE OUI Identifier: 00 54 52 00:07:40.966 Multi-path I/O 00:07:40.966 May have multiple subsystem ports: No 00:07:40.966 May have multiple controllers: No 00:07:40.966 Associated with SR-IOV VF: No 00:07:40.966 Max Data Transfer Size: 524288 00:07:40.966 Max Number of Namespaces: 256 00:07:40.966 Max Number of I/O Queues: 64 00:07:40.966 NVMe Specification Version (VS): 1.4 00:07:40.966 NVMe Specification Version (Identify): 1.4 00:07:40.966 Maximum Queue Entries: 2048 00:07:40.966 Contiguous Queues Required: Yes 00:07:40.966 Arbitration Mechanisms Supported 00:07:40.966 Weighted Round Robin: Not Supported 00:07:40.966 Vendor Specific: Not Supported 00:07:40.966 Reset Timeout: 7500 ms 00:07:40.966 Doorbell Stride: 4 bytes 00:07:40.966 NVM Subsystem Reset: Not Supported 00:07:40.966 Command Sets Supported 00:07:40.966 NVM Command Set: Supported 00:07:40.966 Boot Partition: Not Supported 00:07:40.966 Memory Page Size Minimum: 4096 bytes 00:07:40.966 Memory Page Size Maximum: 65536 bytes 
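The identify dumps in this stage all come from the loop traced above at nvme/nvme.sh@15-16, which runs spdk_nvme_identify once per PCIe address. A minimal bash sketch to reproduce these dumps by hand, assuming the repo layout shown in this log (/home/vagrant/spdk_repo/spdk) and that the emulated devices have already been bound to a userspace driver (e.g. with SPDK's scripts/setup.sh):

    cd /home/vagrant/spdk_repo/spdk
    # Same shape as the nvme.sh loop: identify each QEMU NVMe controller in turn.
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0
    done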
00:07:40.966 Persistent Memory Region: Not Supported 00:07:40.966 Optional Asynchronous Events Supported 00:07:40.966 Namespace Attribute Notices: Supported 00:07:40.966 Firmware Activation Notices: Not Supported 00:07:40.966 ANA Change Notices: Not Supported 00:07:40.966 PLE Aggregate Log Change Notices: Not Supported 00:07:40.966 LBA Status Info Alert Notices: Not Supported 00:07:40.966 EGE Aggregate Log Change Notices: Not Supported 00:07:40.966 Normal NVM Subsystem Shutdown event: Not Supported 00:07:40.966 Zone Descriptor Change Notices: Not Supported 00:07:40.966 Discovery Log Change Notices: Not Supported 00:07:40.966 Controller Attributes 00:07:40.966 128-bit Host Identifier: Not Supported 00:07:40.966 Non-Operational Permissive Mode: Not Supported 00:07:40.966 NVM Sets: Not Supported 00:07:40.966 Read Recovery Levels: Not Supported 00:07:40.966 Endurance Groups: Not Supported 00:07:40.966 Predictable Latency Mode: Not Supported 00:07:40.966 Traffic Based Keep Alive: Not Supported 00:07:40.966 Namespace Granularity: Not Supported 00:07:40.966 SQ Associations: Not Supported 00:07:40.966 UUID List: Not Supported 00:07:40.966 Multi-Domain Subsystem: Not Supported 00:07:40.966 Fixed Capacity Management: Not Supported 00:07:40.966 Variable Capacity Management: Not Supported 00:07:40.966 Delete Endurance Group: Not Supported 00:07:40.966 Delete NVM Set: Not Supported 00:07:40.966 Extended LBA Formats Supported: Supported 00:07:40.966 Flexible Data Placement Supported: Not Supported 00:07:40.966 00:07:40.966 Controller Memory Buffer Support 00:07:40.966 ================================ 00:07:40.966 Supported: No 00:07:40.966 00:07:40.966 Persistent Memory Region Support 00:07:40.966 ================================ 00:07:40.966 Supported: No 00:07:40.966 00:07:40.966 Admin Command Set Attributes 00:07:40.966 ============================ 00:07:40.966 Security Send/Receive: Not Supported 00:07:40.966 Format NVM: Supported 00:07:40.966 Firmware Activate/Download: Not Supported 00:07:40.966 Namespace Management: Supported 00:07:40.966 Device Self-Test: Not Supported 00:07:40.966 Directives: Supported 00:07:40.966 NVMe-MI: Not Supported 00:07:40.966 Virtualization Management: Not Supported 00:07:40.966 Doorbell Buffer Config: Supported 00:07:40.966 Get LBA Status Capability: Not Supported 00:07:40.966 Command & Feature Lockdown Capability: Not Supported 00:07:40.966 Abort Command Limit: 4 00:07:40.966 Async Event Request Limit: 4 00:07:40.966 Number of Firmware Slots: N/A 00:07:40.966 Firmware Slot 1 Read-Only: N/A 00:07:40.966 Firmware Activation Without Reset: N/A 00:07:40.966 Multiple Update Detection Support: N/A 00:07:40.966 Firmware Update Granularity: No Information Provided 00:07:40.966 Per-Namespace SMART Log: Yes 00:07:40.966 Asymmetric Namespace Access Log Page: Not Supported 00:07:40.966 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:40.966 Command Effects Log Page: Supported 00:07:40.966 Get Log Page Extended Data: Supported 00:07:40.966 Telemetry Log Pages: Not Supported 00:07:40.966 Persistent Event Log Pages: Not Supported 00:07:40.966 Supported Log Pages Log Page: May Support 00:07:40.966 Commands Supported & Effects Log Page: Not Supported 00:07:40.966 Feature Identifiers & Effects Log Page: May Support 00:07:40.966 NVMe-MI Commands & Effects Log Page: May Support 00:07:40.966 Data Area 4 for Telemetry Log: Not Supported 00:07:40.966 Error Log Page Entries Supported: 1 00:07:40.966 Keep Alive: Not Supported 00:07:40.966 00:07:40.966 NVM Command Set Attributes 00:07:40.966
========================== 00:07:40.966 Submission Queue Entry Size 00:07:40.966 Max: 64 00:07:40.966 Min: 64 00:07:40.966 Completion Queue Entry Size 00:07:40.966 Max: 16 00:07:40.966 Min: 16 00:07:40.966 Number of Namespaces: 256 00:07:40.966 Compare Command: Supported 00:07:40.966 Write Uncorrectable Command: Not Supported 00:07:40.966 Dataset Management Command: Supported 00:07:40.966 Write Zeroes Command: Supported 00:07:40.966 Set Features Save Field: Supported 00:07:40.966 Reservations: Not Supported 00:07:40.966 Timestamp: Supported 00:07:40.966 Copy: Supported 00:07:40.966 Volatile Write Cache: Present 00:07:40.966 Atomic Write Unit (Normal): 1 00:07:40.966 Atomic Write Unit (PFail): 1 00:07:40.966 Atomic Compare & Write Unit: 1 00:07:40.966 Fused Compare & Write: Not Supported 00:07:40.966 Scatter-Gather List 00:07:40.966 SGL Command Set: Supported 00:07:40.966 SGL Keyed: Not Supported 00:07:40.966 SGL Bit Bucket Descriptor: Not Supported 00:07:40.966 SGL Metadata Pointer: Not Supported 00:07:40.966 Oversized SGL: Not Supported 00:07:40.966 SGL Metadata Address: Not Supported 00:07:40.966 SGL Offset: Not Supported 00:07:40.966 Transport SGL Data Block: Not Supported 00:07:40.966 Replay Protected Memory Block: Not Supported 00:07:40.966 00:07:40.966 Firmware Slot Information 00:07:40.966 ========================= 00:07:40.966 Active slot: 1 00:07:40.966 Slot 1 Firmware Revision: 1.0 00:07:40.966 00:07:40.966 00:07:40.966 Commands Supported and Effects 00:07:40.966 ============================== 00:07:40.966 Admin Commands 00:07:40.966 -------------- 00:07:40.966 Delete I/O Submission Queue (00h): Supported 00:07:40.966 Create I/O Submission Queue (01h): Supported 00:07:40.966 Get Log Page (02h): Supported 00:07:40.966 Delete I/O Completion Queue (04h): Supported 00:07:40.966 Create I/O Completion Queue (05h): Supported 00:07:40.966 Identify (06h): Supported 00:07:40.966 Abort (08h): Supported 00:07:40.966 Set Features (09h): Supported 00:07:40.966 Get Features (0Ah): Supported 00:07:40.966 Asynchronous Event Request (0Ch): Supported 00:07:40.966 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:40.966 Directive Send (19h): Supported 00:07:40.966 Directive Receive (1Ah): Supported 00:07:40.966 Virtualization Management (1Ch): Supported 00:07:40.966 Doorbell Buffer Config (7Ch): Supported 00:07:40.966 Format NVM (80h): Supported LBA-Change 00:07:40.966 I/O Commands 00:07:40.966 ------------ 00:07:40.966 Flush (00h): Supported LBA-Change 00:07:40.966 Write (01h): Supported LBA-Change 00:07:40.966 Read (02h): Supported 00:07:40.966 Compare (05h): Supported 00:07:40.967 Write Zeroes (08h): Supported LBA-Change 00:07:40.967 Dataset Management (09h): Supported LBA-Change 00:07:40.967 Unknown (0Ch): Supported 00:07:40.967 Unknown (12h): Supported 00:07:40.967 Copy (19h): Supported LBA-Change 00:07:40.967 Unknown (1Dh): Supported LBA-Change 00:07:40.967 00:07:40.967 Error Log 00:07:40.967 ========= 00:07:40.967 00:07:40.967 Arbitration 00:07:40.967 =========== 00:07:40.967 Arbitration Burst: no limit 00:07:40.967 00:07:40.967 Power Management 00:07:40.967 ================ 00:07:40.967 Number of Power States: 1 00:07:40.967 Current Power State: Power State #0 00:07:40.967 Power State #0: 00:07:40.967 Max Power: 25.00 W 00:07:40.967 Non-Operational State: Operational 00:07:40.967 Entry Latency: 16 microseconds 00:07:40.967 Exit Latency: 4 microseconds 00:07:40.967 Relative Read Throughput: 0 00:07:40.967 Relative Read Latency: 0 00:07:40.967 Relative Write Throughput: 0 
00:07:40.967 Relative Write Latency: 0 00:07:40.967 Idle Power: Not Reported 00:07:40.967 Active Power: Not Reported 00:07:40.967 Non-Operational Permissive Mode: Not Supported 00:07:40.967 00:07:40.967 Health Information 00:07:40.967 ================== 00:07:40.967 Critical Warnings: 00:07:40.967 Available Spare Space: OK 00:07:40.967 Temperature: OK 00:07:40.967 Device Reliability: OK 00:07:40.967 Read Only: No 00:07:40.967 Volatile Memory Backup: OK 00:07:40.967 Current Temperature: 323 Kelvin (50 Celsius) 00:07:40.967 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:40.967 Available Spare: 0% 00:07:40.967 Available Spare Threshold: 0% 00:07:40.967 Life Percentage Used: 0% 00:07:40.967 Data Units Read: 2048 00:07:40.967 Data Units Written: 1835 00:07:40.967 Host Read Commands: 104437 00:07:40.967 Host Write Commands: 102706 00:07:40.967 Controller Busy Time: 0 minutes 00:07:40.967 Power Cycles: 0 00:07:40.967 Power On Hours: 0 hours 00:07:40.967 Unsafe Shutdowns: 0 00:07:40.967 Unrecoverable Media Errors: 0 00:07:40.967 Lifetime Error Log Entries: 0 00:07:40.967 Warning Temperature Time: 0 minutes 00:07:40.967 Critical Temperature Time: 0 minutes 00:07:40.967 00:07:40.967 Number of Queues 00:07:40.967 ================ 00:07:40.967 Number of I/O Submission Queues: 64 00:07:40.967 Number of I/O Completion Queues: 64 00:07:40.967 00:07:40.967 ZNS Specific Controller Data 00:07:40.967 ============================ 00:07:40.967 Zone Append Size Limit: 0 00:07:40.967 00:07:40.967 00:07:40.967 Active Namespaces 00:07:40.967 ================= 00:07:40.967 Namespace ID:1 00:07:40.967 Error Recovery Timeout: Unlimited 00:07:40.967 Command Set Identifier: NVM (00h) 00:07:40.967 Deallocate: Supported 00:07:40.967 Deallocated/Unwritten Error: Supported 00:07:40.967 Deallocated Read Value: All 0x00 00:07:40.967 Deallocate in Write Zeroes: Not Supported 00:07:40.967 Deallocated Guard Field: 0xFFFF 00:07:40.967 Flush: Supported 00:07:40.967 Reservation: Not Supported 00:07:40.967 Namespace Sharing Capabilities: Private 00:07:40.967 Size (in LBAs): 1048576 (4GiB) 00:07:40.967 Capacity (in LBAs): 1048576 (4GiB) 00:07:40.967 Utilization (in LBAs): 1048576 (4GiB) 00:07:40.967 Thin Provisioning: Not Supported 00:07:40.967 Per-NS Atomic Units: No 00:07:40.967 Maximum Single Source Range Length: 128 00:07:40.967 Maximum Copy Length: 128 00:07:40.967 Maximum Source Range Count: 128 00:07:40.967 NGUID/EUI64 Never Reused: No 00:07:40.967 Namespace Write Protected: No 00:07:40.967 Number of LBA Formats: 8 00:07:40.967 Current LBA Format: LBA Format #04 00:07:40.967 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:40.967 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:40.967 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:40.967 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:40.967 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:40.967 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:40.967 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:40.967 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:40.967 00:07:40.967 NVM Specific Namespace Data 00:07:40.967 =========================== 00:07:40.967 Logical Block Storage Tag Mask: 0 00:07:40.967 Protection Information Capabilities: 00:07:40.967 16b Guard Protection Information Storage Tag Support: No 00:07:40.967 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:40.967 Storage Tag Check Read Support: No 00:07:40.967 Extended LBA Format #00: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:07:40.967 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.967 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.967 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.967 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.967 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.967 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.967 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.967 Namespace ID:2 00:07:40.967 Error Recovery Timeout: Unlimited 00:07:40.967 Command Set Identifier: NVM (00h) 00:07:40.967 Deallocate: Supported 00:07:40.967 Deallocated/Unwritten Error: Supported 00:07:40.967 Deallocated Read Value: All 0x00 00:07:40.967 Deallocate in Write Zeroes: Not Supported 00:07:40.967 Deallocated Guard Field: 0xFFFF 00:07:40.967 Flush: Supported 00:07:40.967 Reservation: Not Supported 00:07:40.967 Namespace Sharing Capabilities: Private 00:07:40.967 Size (in LBAs): 1048576 (4GiB) 00:07:40.967 Capacity (in LBAs): 1048576 (4GiB) 00:07:40.967 Utilization (in LBAs): 1048576 (4GiB) 00:07:40.967 Thin Provisioning: Not Supported 00:07:40.967 Per-NS Atomic Units: No 00:07:40.967 Maximum Single Source Range Length: 128 00:07:40.967 Maximum Copy Length: 128 00:07:40.967 Maximum Source Range Count: 128 00:07:40.967 NGUID/EUI64 Never Reused: No 00:07:40.967 Namespace Write Protected: No 00:07:40.967 Number of LBA Formats: 8 00:07:40.967 Current LBA Format: LBA Format #04 00:07:40.967 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:40.967 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:40.967 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:40.967 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:40.967 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:40.967 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:40.967 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:40.967 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:40.967 00:07:40.967 NVM Specific Namespace Data 00:07:40.967 =========================== 00:07:40.967 Logical Block Storage Tag Mask: 0 00:07:40.967 Protection Information Capabilities: 00:07:40.967 16b Guard Protection Information Storage Tag Support: No 00:07:40.967 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:40.967 Storage Tag Check Read Support: No 00:07:40.967 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.967 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.967 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.967 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.967 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.967 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.967 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.967 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.967 Namespace ID:3 00:07:40.967 Error 
Recovery Timeout: Unlimited 00:07:40.967 Command Set Identifier: NVM (00h) 00:07:40.967 Deallocate: Supported 00:07:40.967 Deallocated/Unwritten Error: Supported 00:07:40.967 Deallocated Read Value: All 0x00 00:07:40.967 Deallocate in Write Zeroes: Not Supported 00:07:40.967 Deallocated Guard Field: 0xFFFF 00:07:40.967 Flush: Supported 00:07:40.967 Reservation: Not Supported 00:07:40.967 Namespace Sharing Capabilities: Private 00:07:40.967 Size (in LBAs): 1048576 (4GiB) 00:07:40.967 Capacity (in LBAs): 1048576 (4GiB) 00:07:40.968 Utilization (in LBAs): 1048576 (4GiB) 00:07:40.968 Thin Provisioning: Not Supported 00:07:40.968 Per-NS Atomic Units: No 00:07:40.968 Maximum Single Source Range Length: 128 00:07:40.968 Maximum Copy Length: 128 00:07:40.968 Maximum Source Range Count: 128 00:07:40.968 NGUID/EUI64 Never Reused: No 00:07:40.968 Namespace Write Protected: No 00:07:40.968 Number of LBA Formats: 8 00:07:40.968 Current LBA Format: LBA Format #04 00:07:40.968 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:40.968 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:40.968 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:40.968 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:40.968 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:40.968 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:40.968 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:40.968 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:40.968 00:07:40.968 NVM Specific Namespace Data 00:07:40.968 =========================== 00:07:40.968 Logical Block Storage Tag Mask: 0 00:07:40.968 Protection Information Capabilities: 00:07:40.968 16b Guard Protection Information Storage Tag Support: No 00:07:40.968 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:41.228 Storage Tag Check Read Support: No 00:07:41.228 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.228 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.228 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.228 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.228 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.228 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.228 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.228 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.228 12:40:06 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:41.228 12:40:06 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:41.228 ===================================================== 00:07:41.228 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:41.228 ===================================================== 00:07:41.229 Controller Capabilities/Features 00:07:41.229 ================================ 00:07:41.229 Vendor ID: 1b36 00:07:41.229 Subsystem Vendor ID: 1af4 00:07:41.229 Serial Number: 12343 00:07:41.229 Model Number: QEMU NVMe Ctrl 00:07:41.229 Firmware Version: 8.0.0 00:07:41.229 Recommended Arb Burst: 6 00:07:41.229 IEEE OUI Identifier: 00 54 52 00:07:41.229 Multi-path I/O 00:07:41.229 May have multiple 
subsystem ports: No 00:07:41.229 May have multiple controllers: Yes 00:07:41.229 Associated with SR-IOV VF: No 00:07:41.229 Max Data Transfer Size: 524288 00:07:41.229 Max Number of Namespaces: 256 00:07:41.229 Max Number of I/O Queues: 64 00:07:41.229 NVMe Specification Version (VS): 1.4 00:07:41.229 NVMe Specification Version (Identify): 1.4 00:07:41.229 Maximum Queue Entries: 2048 00:07:41.229 Contiguous Queues Required: Yes 00:07:41.229 Arbitration Mechanisms Supported 00:07:41.229 Weighted Round Robin: Not Supported 00:07:41.229 Vendor Specific: Not Supported 00:07:41.229 Reset Timeout: 7500 ms 00:07:41.229 Doorbell Stride: 4 bytes 00:07:41.229 NVM Subsystem Reset: Not Supported 00:07:41.229 Command Sets Supported 00:07:41.229 NVM Command Set: Supported 00:07:41.229 Boot Partition: Not Supported 00:07:41.229 Memory Page Size Minimum: 4096 bytes 00:07:41.229 Memory Page Size Maximum: 65536 bytes 00:07:41.229 Persistent Memory Region: Not Supported 00:07:41.229 Optional Asynchronous Events Supported 00:07:41.229 Namespace Attribute Notices: Supported 00:07:41.229 Firmware Activation Notices: Not Supported 00:07:41.229 ANA Change Notices: Not Supported 00:07:41.229 PLE Aggregate Log Change Notices: Not Supported 00:07:41.229 LBA Status Info Alert Notices: Not Supported 00:07:41.229 EGE Aggregate Log Change Notices: Not Supported 00:07:41.229 Normal NVM Subsystem Shutdown event: Not Supported 00:07:41.229 Zone Descriptor Change Notices: Not Supported 00:07:41.229 Discovery Log Change Notices: Not Supported 00:07:41.229 Controller Attributes 00:07:41.229 128-bit Host Identifier: Not Supported 00:07:41.229 Non-Operational Permissive Mode: Not Supported 00:07:41.229 NVM Sets: Not Supported 00:07:41.229 Read Recovery Levels: Not Supported 00:07:41.229 Endurance Groups: Supported 00:07:41.229 Predictable Latency Mode: Not Supported 00:07:41.229 Traffic Based Keep Alive: Not Supported 00:07:41.229 Namespace Granularity: Not Supported 00:07:41.229 SQ Associations: Not Supported 00:07:41.229 UUID List: Not Supported 00:07:41.229 Multi-Domain Subsystem: Not Supported 00:07:41.229 Fixed Capacity Management: Not Supported 00:07:41.229 Variable Capacity Management: Not Supported 00:07:41.229 Delete Endurance Group: Not Supported 00:07:41.229 Delete NVM Set: Not Supported 00:07:41.229 Extended LBA Formats Supported: Supported 00:07:41.229 Flexible Data Placement Supported: Supported 00:07:41.229 00:07:41.229 Controller Memory Buffer Support 00:07:41.229 ================================ 00:07:41.229 Supported: No 00:07:41.229 00:07:41.229 Persistent Memory Region Support 00:07:41.229 ================================ 00:07:41.229 Supported: No 00:07:41.229 00:07:41.229 Admin Command Set Attributes 00:07:41.229 ============================ 00:07:41.229 Security Send/Receive: Not Supported 00:07:41.229 Format NVM: Supported 00:07:41.229 Firmware Activate/Download: Not Supported 00:07:41.229 Namespace Management: Supported 00:07:41.229 Device Self-Test: Not Supported 00:07:41.229 Directives: Supported 00:07:41.229 NVMe-MI: Not Supported 00:07:41.229 Virtualization Management: Not Supported 00:07:41.229 Doorbell Buffer Config: Supported 00:07:41.229 Get LBA Status Capability: Not Supported 00:07:41.229 Command & Feature Lockdown Capability: Not Supported 00:07:41.229 Abort Command Limit: 4 00:07:41.229 Async Event Request Limit: 4 00:07:41.229 Number of Firmware Slots: N/A 00:07:41.229 Firmware Slot 1 Read-Only: N/A 00:07:41.229 Firmware Activation Without Reset: N/A 00:07:41.229 Multiple Update Detection
Support: N/A 00:07:41.229 Firmware Update Granularity: No Information Provided 00:07:41.229 Per-Namespace SMART Log: Yes 00:07:41.229 Asymmetric Namespace Access Log Page: Not Supported 00:07:41.229 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:41.229 Command Effects Log Page: Supported 00:07:41.229 Get Log Page Extended Data: Supported 00:07:41.229 Telemetry Log Pages: Not Supported 00:07:41.229 Persistent Event Log Pages: Not Supported 00:07:41.229 Supported Log Pages Log Page: May Support 00:07:41.229 Commands Supported & Effects Log Page: Not Supported 00:07:41.229 Feature Identifiers & Effects Log Page: May Support 00:07:41.229 NVMe-MI Commands & Effects Log Page: May Support 00:07:41.229 Data Area 4 for Telemetry Log: Not Supported 00:07:41.229 Error Log Page Entries Supported: 1 00:07:41.229 Keep Alive: Not Supported 00:07:41.229 00:07:41.229 NVM Command Set Attributes 00:07:41.229 ========================== 00:07:41.229 Submission Queue Entry Size 00:07:41.229 Max: 64 00:07:41.229 Min: 64 00:07:41.229 Completion Queue Entry Size 00:07:41.229 Max: 16 00:07:41.229 Min: 16 00:07:41.229 Number of Namespaces: 256 00:07:41.229 Compare Command: Supported 00:07:41.229 Write Uncorrectable Command: Not Supported 00:07:41.229 Dataset Management Command: Supported 00:07:41.229 Write Zeroes Command: Supported 00:07:41.229 Set Features Save Field: Supported 00:07:41.229 Reservations: Not Supported 00:07:41.229 Timestamp: Supported 00:07:41.229 Copy: Supported 00:07:41.229 Volatile Write Cache: Present 00:07:41.229 Atomic Write Unit (Normal): 1 00:07:41.229 Atomic Write Unit (PFail): 1 00:07:41.229 Atomic Compare & Write Unit: 1 00:07:41.229 Fused Compare & Write: Not Supported 00:07:41.229 Scatter-Gather List 00:07:41.229 SGL Command Set: Supported 00:07:41.229 SGL Keyed: Not Supported 00:07:41.229 SGL Bit Bucket Descriptor: Not Supported 00:07:41.229 SGL Metadata Pointer: Not Supported 00:07:41.229 Oversized SGL: Not Supported 00:07:41.229 SGL Metadata Address: Not Supported 00:07:41.229 SGL Offset: Not Supported 00:07:41.229 Transport SGL Data Block: Not Supported 00:07:41.229 Replay Protected Memory Block: Not Supported 00:07:41.229 00:07:41.229 Firmware Slot Information 00:07:41.229 ========================= 00:07:41.229 Active slot: 1 00:07:41.229 Slot 1 Firmware Revision: 1.0 00:07:41.229 00:07:41.229 00:07:41.229 Commands Supported and Effects 00:07:41.229 ============================== 00:07:41.229 Admin Commands 00:07:41.229 -------------- 00:07:41.229 Delete I/O Submission Queue (00h): Supported 00:07:41.229 Create I/O Submission Queue (01h): Supported 00:07:41.229 Get Log Page (02h): Supported 00:07:41.229 Delete I/O Completion Queue (04h): Supported 00:07:41.229 Create I/O Completion Queue (05h): Supported 00:07:41.229 Identify (06h): Supported 00:07:41.229 Abort (08h): Supported 00:07:41.229 Set Features (09h): Supported 00:07:41.229 Get Features (0Ah): Supported 00:07:41.229 Asynchronous Event Request (0Ch): Supported 00:07:41.229 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:41.229 Directive Send (19h): Supported 00:07:41.229 Directive Receive (1Ah): Supported 00:07:41.229 Virtualization Management (1Ch): Supported 00:07:41.229 Doorbell Buffer Config (7Ch): Supported 00:07:41.229 Format NVM (80h): Supported LBA-Change 00:07:41.229 I/O Commands 00:07:41.229 ------------ 00:07:41.230 Flush (00h): Supported LBA-Change 00:07:41.230 Write (01h): Supported LBA-Change 00:07:41.230 Read (02h): Supported 00:07:41.230 Compare (05h): Supported 00:07:41.230 Write
Zeroes (08h): Supported LBA-Change 00:07:41.230 Dataset Management (09h): Supported LBA-Change 00:07:41.230 Unknown (0Ch): Supported 00:07:41.230 Unknown (12h): Supported 00:07:41.230 Copy (19h): Supported LBA-Change 00:07:41.230 Unknown (1Dh): Supported LBA-Change 00:07:41.230 00:07:41.230 Error Log 00:07:41.230 ========= 00:07:41.230 00:07:41.230 Arbitration 00:07:41.230 =========== 00:07:41.230 Arbitration Burst: no limit 00:07:41.230 00:07:41.230 Power Management 00:07:41.230 ================ 00:07:41.230 Number of Power States: 1 00:07:41.230 Current Power State: Power State #0 00:07:41.230 Power State #0: 00:07:41.230 Max Power: 25.00 W 00:07:41.230 Non-Operational State: Operational 00:07:41.230 Entry Latency: 16 microseconds 00:07:41.230 Exit Latency: 4 microseconds 00:07:41.230 Relative Read Throughput: 0 00:07:41.230 Relative Read Latency: 0 00:07:41.230 Relative Write Throughput: 0 00:07:41.230 Relative Write Latency: 0 00:07:41.230 Idle Power: Not Reported 00:07:41.230 Active Power: Not Reported 00:07:41.230 Non-Operational Permissive Mode: Not Supported 00:07:41.230 00:07:41.230 Health Information 00:07:41.230 ================== 00:07:41.230 Critical Warnings: 00:07:41.230 Available Spare Space: OK 00:07:41.230 Temperature: OK 00:07:41.230 Device Reliability: OK 00:07:41.230 Read Only: No 00:07:41.230 Volatile Memory Backup: OK 00:07:41.230 Current Temperature: 323 Kelvin (50 Celsius) 00:07:41.230 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:41.230 Available Spare: 0% 00:07:41.230 Available Spare Threshold: 0% 00:07:41.230 Life Percentage Used: 0% 00:07:41.230 Data Units Read: 851 00:07:41.230 Data Units Written: 780 00:07:41.230 Host Read Commands: 36182 00:07:41.230 Host Write Commands: 35605 00:07:41.230 Controller Busy Time: 0 minutes 00:07:41.230 Power Cycles: 0 00:07:41.230 Power On Hours: 0 hours 00:07:41.230 Unsafe Shutdowns: 0 00:07:41.230 Unrecoverable Media Errors: 0 00:07:41.230 Lifetime Error Log Entries: 0 00:07:41.230 Warning Temperature Time: 0 minutes 00:07:41.230 Critical Temperature Time: 0 minutes 00:07:41.230 00:07:41.230 Number of Queues 00:07:41.230 ================ 00:07:41.230 Number of I/O Submission Queues: 64 00:07:41.230 Number of I/O Completion Queues: 64 00:07:41.230 00:07:41.230 ZNS Specific Controller Data 00:07:41.230 ============================ 00:07:41.230 Zone Append Size Limit: 0 00:07:41.230 00:07:41.230 00:07:41.230 Active Namespaces 00:07:41.230 ================= 00:07:41.230 Namespace ID:1 00:07:41.230 Error Recovery Timeout: Unlimited 00:07:41.230 Command Set Identifier: NVM (00h) 00:07:41.230 Deallocate: Supported 00:07:41.230 Deallocated/Unwritten Error: Supported 00:07:41.230 Deallocated Read Value: All 0x00 00:07:41.230 Deallocate in Write Zeroes: Not Supported 00:07:41.230 Deallocated Guard Field: 0xFFFF 00:07:41.230 Flush: Supported 00:07:41.230 Reservation: Not Supported 00:07:41.230 Namespace Sharing Capabilities: Multiple Controllers 00:07:41.230 Size (in LBAs): 262144 (1GiB) 00:07:41.230 Capacity (in LBAs): 262144 (1GiB) 00:07:41.230 Utilization (in LBAs): 262144 (1GiB) 00:07:41.230 Thin Provisioning: Not Supported 00:07:41.230 Per-NS Atomic Units: No 00:07:41.230 Maximum Single Source Range Length: 128 00:07:41.230 Maximum Copy Length: 128 00:07:41.230 Maximum Source Range Count: 128 00:07:41.230 NGUID/EUI64 Never Reused: No 00:07:41.230 Namespace Write Protected: No 00:07:41.230 Endurance group ID: 1 00:07:41.230 Number of LBA Formats: 8 00:07:41.230 Current LBA Format: LBA Format #04 00:07:41.230 LBA Format #00: 
Data Size: 512 Metadata Size: 0 00:07:41.230 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:41.230 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:41.230 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:41.230 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:41.230 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:41.230 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:41.230 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:41.230 00:07:41.230 Get Feature FDP: 00:07:41.230 ================ 00:07:41.230 Enabled: Yes 00:07:41.230 FDP configuration index: 0 00:07:41.230 00:07:41.230 FDP configurations log page 00:07:41.230 =========================== 00:07:41.230 Number of FDP configurations: 1 00:07:41.230 Version: 0 00:07:41.230 Size: 112 00:07:41.230 FDP Configuration Descriptor: 0 00:07:41.230 Descriptor Size: 96 00:07:41.230 Reclaim Group Identifier format: 2 00:07:41.230 FDP Volatile Write Cache: Not Present 00:07:41.230 FDP Configuration: Valid 00:07:41.230 Vendor Specific Size: 0 00:07:41.230 Number of Reclaim Groups: 2 00:07:41.230 Number of Reclaim Unit Handles: 8 00:07:41.230 Max Placement Identifiers: 128 00:07:41.230 Number of Namespaces Supported: 256 00:07:41.230 Reclaim Unit Nominal Size: 6000000 bytes 00:07:41.230 Estimated Reclaim Unit Time Limit: Not Reported 00:07:41.230 RUH Desc #000: RUH Type: Initially Isolated 00:07:41.230 RUH Desc #001: RUH Type: Initially Isolated 00:07:41.230 RUH Desc #002: RUH Type: Initially Isolated 00:07:41.230 RUH Desc #003: RUH Type: Initially Isolated 00:07:41.230 RUH Desc #004: RUH Type: Initially Isolated 00:07:41.230 RUH Desc #005: RUH Type: Initially Isolated 00:07:41.230 RUH Desc #006: RUH Type: Initially Isolated 00:07:41.230 RUH Desc #007: RUH Type: Initially Isolated 00:07:41.230 00:07:41.230 FDP reclaim unit handle usage log page 00:07:41.230 ====================================== 00:07:41.230 Number of Reclaim Unit Handles: 8 00:07:41.230 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:41.230 RUH Usage Desc #001: RUH Attributes: Unused 00:07:41.230 RUH Usage Desc #002: RUH Attributes: Unused 00:07:41.230 RUH Usage Desc #003: RUH Attributes: Unused 00:07:41.230 RUH Usage Desc #004: RUH Attributes: Unused 00:07:41.230 RUH Usage Desc #005: RUH Attributes: Unused 00:07:41.230 RUH Usage Desc #006: RUH Attributes: Unused 00:07:41.230 RUH Usage Desc #007: RUH Attributes: Unused 00:07:41.230 00:07:41.230 FDP statistics log page 00:07:41.230 ======================= 00:07:41.230 Host bytes with metadata written: 460234752 00:07:41.230 Media bytes with metadata written: 460308480 00:07:41.230 Media bytes erased: 0 00:07:41.230 00:07:41.230 FDP events log page 00:07:41.230 =================== 00:07:41.230 Number of FDP events: 0 00:07:41.230 00:07:41.230 NVM Specific Namespace Data 00:07:41.230 =========================== 00:07:41.230 Logical Block Storage Tag Mask: 0 00:07:41.230 Protection Information Capabilities: 00:07:41.230 16b Guard Protection Information Storage Tag Support: No 00:07:41.230 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:41.230 Storage Tag Check Read Support: No 00:07:41.230 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.230 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.230 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.230 Extended LBA Format #03: Storage Tag Size: 0
, Protection Information Format: 16b Guard PI 00:07:41.230 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.230 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.230 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.230 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:41.230 00:07:41.230 real 0m1.184s 00:07:41.230 user 0m0.413s 00:07:41.230 sys 0m0.557s 00:07:41.230 12:40:06 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.230 12:40:06 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:41.230 ************************************ 00:07:41.230 END TEST nvme_identify 00:07:41.230 ************************************ 00:07:41.491 12:40:06 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:41.491 12:40:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.491 12:40:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.491 12:40:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.491 ************************************ 00:07:41.491 START TEST nvme_perf 00:07:41.491 ************************************ 00:07:41.491 12:40:06 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:41.491 12:40:06 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:42.875 Initializing NVMe Controllers 00:07:42.875 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:42.875 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:42.875 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:42.875 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:42.875 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:42.875 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:42.875 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:42.875 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:42.875 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:42.875 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:42.875 Initialization complete. Launching workers. 
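[Editor's note on reading the summary table that follows: with -q 128 -w read -o 12288 -t 1, spdk_nvme_perf issues 12288-byte (12 KiB) reads at queue depth 128 for one second per namespace, so the MiB/s column is simply the IOPS column scaled by the I/O size: MiB/s = IOPS x 12288 / 2^20. A minimal sketch of that arithmetic in plain Python follows; the helper name is illustrative and nothing here is SPDK API.]

    # Minimal sketch: derive the MiB/s column of the spdk_nvme_perf summary
    # table from its IOPS column. Assumes only what the command line above
    # shows: -o 12288 (bytes per I/O), with 1 MiB = 2**20 bytes.

    IO_SIZE_BYTES = 12288  # from the -o 12288 flag
    MIB = 1 << 20          # 1,048,576 bytes

    def mib_per_sec(iops: float, io_size: int = IO_SIZE_BYTES) -> float:
        """Throughput in MiB/s implied by an IOPS figure at a fixed I/O size."""
        return iops * io_size / MIB

    # Sample figure from the table below (PCIE 0000:00:10.0 NSID 1):
    print(f"{mib_per_sec(15731.48):.2f}")  # -> 184.35, matching the MiB/s column

[The same check works for the Total row: 94452.82 IOPS x 12288 / 2^20 = 1106.87 MiB/s.]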
00:07:42.875 ======================================================== 00:07:42.875 Latency(us) 00:07:42.875 Device Information : IOPS MiB/s Average min max 00:07:42.875 PCIE (0000:00:10.0) NSID 1 from core 0: 15731.48 184.35 8146.87 5686.71 35251.88 00:07:42.875 PCIE (0000:00:11.0) NSID 1 from core 0: 15731.48 184.35 8135.73 5766.42 33761.67 00:07:42.875 PCIE (0000:00:13.0) NSID 1 from core 0: 15731.48 184.35 8123.48 5781.75 32674.25 00:07:42.875 PCIE (0000:00:12.0) NSID 1 from core 0: 15731.48 184.35 8110.88 5766.59 30965.51 00:07:42.875 PCIE (0000:00:12.0) NSID 2 from core 0: 15731.48 184.35 8098.04 5812.81 29308.94 00:07:42.875 PCIE (0000:00:12.0) NSID 3 from core 0: 15795.43 185.10 8052.74 5769.67 22957.29 00:07:42.875 ======================================================== 00:07:42.875 Total : 94452.82 1106.87 8111.25 5686.71 35251.88 00:07:42.875 00:07:42.875 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:42.875 ================================================================================= 00:07:42.876 1.00000% : 5923.446us 00:07:42.876 10.00000% : 6704.837us 00:07:42.876 25.00000% : 7511.434us 00:07:42.876 50.00000% : 7914.732us 00:07:42.876 75.00000% : 8318.031us 00:07:42.876 90.00000% : 8922.978us 00:07:42.876 95.00000% : 10485.760us 00:07:42.876 98.00000% : 12199.778us 00:07:42.876 99.00000% : 13712.148us 00:07:42.876 99.50000% : 27827.594us 00:07:42.876 99.90000% : 34885.317us 00:07:42.876 99.99000% : 35288.615us 00:07:42.876 99.99900% : 35288.615us 00:07:42.876 99.99990% : 35288.615us 00:07:42.876 99.99999% : 35288.615us 00:07:42.876 00:07:42.876 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:42.876 ================================================================================= 00:07:42.876 1.00000% : 5999.065us 00:07:42.876 10.00000% : 6654.425us 00:07:42.876 25.00000% : 7561.846us 00:07:42.876 50.00000% : 7914.732us 00:07:42.876 75.00000% : 8267.618us 00:07:42.876 90.00000% : 8822.154us 00:07:42.876 95.00000% : 10485.760us 00:07:42.876 98.00000% : 12351.015us 00:07:42.876 99.00000% : 13913.797us 00:07:42.876 99.50000% : 26416.049us 00:07:42.876 99.90000% : 33473.772us 00:07:42.876 99.99000% : 33877.071us 00:07:42.876 99.99900% : 33877.071us 00:07:42.876 99.99990% : 33877.071us 00:07:42.876 99.99999% : 33877.071us 00:07:42.876 00:07:42.876 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:42.876 ================================================================================= 00:07:42.876 1.00000% : 5973.858us 00:07:42.876 10.00000% : 6654.425us 00:07:42.876 25.00000% : 7511.434us 00:07:42.876 50.00000% : 7914.732us 00:07:42.876 75.00000% : 8267.618us 00:07:42.876 90.00000% : 8771.742us 00:07:42.876 95.00000% : 10586.585us 00:07:42.876 98.00000% : 12351.015us 00:07:42.876 99.00000% : 14518.745us 00:07:42.876 99.50000% : 25811.102us 00:07:42.876 99.90000% : 32465.526us 00:07:42.876 99.99000% : 32667.175us 00:07:42.876 99.99900% : 32868.825us 00:07:42.876 99.99990% : 32868.825us 00:07:42.876 99.99999% : 32868.825us 00:07:42.876 00:07:42.876 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:42.876 ================================================================================= 00:07:42.876 1.00000% : 5973.858us 00:07:42.876 10.00000% : 6654.425us 00:07:42.876 25.00000% : 7561.846us 00:07:42.876 50.00000% : 7914.732us 00:07:42.876 75.00000% : 8267.618us 00:07:42.876 90.00000% : 8822.154us 00:07:42.876 95.00000% : 10687.409us 00:07:42.876 98.00000% : 12149.366us 00:07:42.876 99.00000% : 
14115.446us 00:07:42.876 99.50000% : 24298.732us 00:07:42.876 99.90000% : 30650.683us 00:07:42.876 99.99000% : 31053.982us 00:07:42.876 99.99900% : 31053.982us 00:07:42.876 99.99990% : 31053.982us 00:07:42.876 99.99999% : 31053.982us 00:07:42.876 00:07:42.876 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:42.876 ================================================================================= 00:07:42.876 1.00000% : 5999.065us 00:07:42.876 10.00000% : 6704.837us 00:07:42.876 25.00000% : 7561.846us 00:07:42.876 50.00000% : 7914.732us 00:07:42.876 75.00000% : 8267.618us 00:07:42.876 90.00000% : 8872.566us 00:07:42.876 95.00000% : 10536.172us 00:07:42.876 98.00000% : 12149.366us 00:07:42.876 99.00000% : 13611.323us 00:07:42.876 99.50000% : 22887.188us 00:07:42.876 99.90000% : 29037.489us 00:07:42.876 99.99000% : 29440.788us 00:07:42.876 99.99900% : 29440.788us 00:07:42.876 99.99990% : 29440.788us 00:07:42.876 99.99999% : 29440.788us 00:07:42.876 00:07:42.876 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:42.876 ================================================================================= 00:07:42.876 1.00000% : 5999.065us 00:07:42.876 10.00000% : 6654.425us 00:07:42.876 25.00000% : 7561.846us 00:07:42.876 50.00000% : 7914.732us 00:07:42.876 75.00000% : 8267.618us 00:07:42.876 90.00000% : 8922.978us 00:07:42.876 95.00000% : 10485.760us 00:07:42.876 98.00000% : 12098.954us 00:07:42.876 99.00000% : 13712.148us 00:07:42.876 99.50000% : 15627.815us 00:07:42.876 99.90000% : 22685.538us 00:07:42.876 99.99000% : 22988.012us 00:07:42.876 99.99900% : 22988.012us 00:07:42.876 99.99990% : 22988.012us 00:07:42.876 99.99999% : 22988.012us 00:07:42.876 00:07:42.876 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:42.876 ============================================================================== 00:07:42.876 Range in us Cumulative IO count 00:07:42.876 5671.385 - 5696.591: 0.0254% ( 4) 00:07:42.876 5696.591 - 5721.797: 0.0572% ( 5) 00:07:42.876 5721.797 - 5747.003: 0.1016% ( 7) 00:07:42.876 5747.003 - 5772.209: 0.1842% ( 13) 00:07:42.876 5772.209 - 5797.415: 0.2731% ( 14) 00:07:42.876 5797.415 - 5822.622: 0.3684% ( 15) 00:07:42.876 5822.622 - 5847.828: 0.5208% ( 24) 00:07:42.876 5847.828 - 5873.034: 0.7241% ( 32) 00:07:42.876 5873.034 - 5898.240: 0.8956% ( 27) 00:07:42.876 5898.240 - 5923.446: 1.0988% ( 32) 00:07:42.876 5923.446 - 5948.652: 1.3720% ( 43) 00:07:42.876 5948.652 - 5973.858: 1.6451% ( 43) 00:07:42.876 5973.858 - 5999.065: 1.8674% ( 35) 00:07:42.876 5999.065 - 6024.271: 2.0897% ( 35) 00:07:42.876 6024.271 - 6049.477: 2.3056% ( 34) 00:07:42.876 6049.477 - 6074.683: 2.5788% ( 43) 00:07:42.876 6074.683 - 6099.889: 2.8582% ( 44) 00:07:42.876 6099.889 - 6125.095: 3.0996% ( 38) 00:07:42.876 6125.095 - 6150.302: 3.3727% ( 43) 00:07:42.876 6150.302 - 6175.508: 3.6395% ( 42) 00:07:42.876 6175.508 - 6200.714: 3.9698% ( 52) 00:07:42.876 6200.714 - 6225.920: 4.2175% ( 39) 00:07:42.876 6225.920 - 6251.126: 4.5160% ( 47) 00:07:42.876 6251.126 - 6276.332: 4.8145% ( 47) 00:07:42.876 6276.332 - 6301.538: 5.0686% ( 40) 00:07:42.876 6301.538 - 6326.745: 5.4116% ( 54) 00:07:42.876 6326.745 - 6351.951: 5.7038% ( 46) 00:07:42.876 6351.951 - 6377.157: 6.0340% ( 52) 00:07:42.876 6377.157 - 6402.363: 6.3643% ( 52) 00:07:42.876 6402.363 - 6427.569: 6.6946% ( 52) 00:07:42.876 6427.569 - 6452.775: 7.0503% ( 56) 00:07:42.876 6452.775 - 6503.188: 7.7172% ( 105) 00:07:42.876 6503.188 - 6553.600: 8.4286% ( 112) 00:07:42.876 6553.600 - 6604.012: 9.1273% ( 
110) 00:07:42.876 6604.012 - 6654.425: 9.8768% ( 118) 00:07:42.876 6654.425 - 6704.837: 10.6136% ( 116) 00:07:42.876 6704.837 - 6755.249: 11.2995% ( 108) 00:07:42.876 6755.249 - 6805.662: 11.9982% ( 110) 00:07:42.876 6805.662 - 6856.074: 12.6397% ( 101) 00:07:42.876 6856.074 - 6906.486: 13.1860% ( 86) 00:07:42.876 6906.486 - 6956.898: 13.6687% ( 76) 00:07:42.876 6956.898 - 7007.311: 14.2530% ( 92) 00:07:42.876 7007.311 - 7057.723: 14.7548% ( 79) 00:07:42.876 7057.723 - 7108.135: 15.2820% ( 83) 00:07:42.876 7108.135 - 7158.548: 15.8981% ( 97) 00:07:42.876 7158.548 - 7208.960: 16.6794% ( 123) 00:07:42.876 7208.960 - 7259.372: 17.6702% ( 156) 00:07:42.876 7259.372 - 7309.785: 18.8008% ( 178) 00:07:42.876 7309.785 - 7360.197: 20.4776% ( 264) 00:07:42.876 7360.197 - 7410.609: 22.2624% ( 281) 00:07:42.876 7410.609 - 7461.022: 24.3331% ( 326) 00:07:42.876 7461.022 - 7511.434: 26.5307% ( 346) 00:07:42.876 7511.434 - 7561.846: 29.2937% ( 435) 00:07:42.876 7561.846 - 7612.258: 32.0884% ( 440) 00:07:42.876 7612.258 - 7662.671: 35.1308% ( 479) 00:07:42.876 7662.671 - 7713.083: 38.2241% ( 487) 00:07:42.876 7713.083 - 7763.495: 41.4444% ( 507) 00:07:42.876 7763.495 - 7813.908: 44.5249% ( 485) 00:07:42.876 7813.908 - 7864.320: 47.9484% ( 539) 00:07:42.876 7864.320 - 7914.732: 51.3402% ( 534) 00:07:42.876 7914.732 - 7965.145: 54.6875% ( 527) 00:07:42.876 7965.145 - 8015.557: 57.9776% ( 518) 00:07:42.876 8015.557 - 8065.969: 61.2868% ( 521) 00:07:42.876 8065.969 - 8116.382: 64.6913% ( 536) 00:07:42.876 8116.382 - 8166.794: 67.9433% ( 512) 00:07:42.876 8166.794 - 8217.206: 71.0048% ( 482) 00:07:42.876 8217.206 - 8267.618: 73.9837% ( 469) 00:07:42.876 8267.618 - 8318.031: 76.6895% ( 426) 00:07:42.876 8318.031 - 8368.443: 78.9571% ( 357) 00:07:42.876 8368.443 - 8418.855: 81.0595% ( 331) 00:07:42.876 8418.855 - 8469.268: 82.8443% ( 281) 00:07:42.876 8469.268 - 8519.680: 84.5020% ( 261) 00:07:42.876 8519.680 - 8570.092: 85.7406% ( 195) 00:07:42.876 8570.092 - 8620.505: 86.7696% ( 162) 00:07:42.876 8620.505 - 8670.917: 87.6651% ( 141) 00:07:42.876 8670.917 - 8721.329: 88.3575% ( 109) 00:07:42.876 8721.329 - 8771.742: 88.9291% ( 90) 00:07:42.876 8771.742 - 8822.154: 89.4563% ( 83) 00:07:42.876 8822.154 - 8872.566: 89.9073% ( 71) 00:07:42.876 8872.566 - 8922.978: 90.1804% ( 43) 00:07:42.876 8922.978 - 8973.391: 90.4345% ( 40) 00:07:42.876 8973.391 - 9023.803: 90.6186% ( 29) 00:07:42.876 9023.803 - 9074.215: 90.7457% ( 20) 00:07:42.876 9074.215 - 9124.628: 90.9045% ( 25) 00:07:42.876 9124.628 - 9175.040: 91.0442% ( 22) 00:07:42.876 9175.040 - 9225.452: 91.1966% ( 24) 00:07:42.876 9225.452 - 9275.865: 91.2983% ( 16) 00:07:42.877 9275.865 - 9326.277: 91.4062% ( 17) 00:07:42.877 9326.277 - 9376.689: 91.5396% ( 21) 00:07:42.877 9376.689 - 9427.102: 91.6730% ( 21) 00:07:42.877 9427.102 - 9477.514: 91.8191% ( 23) 00:07:42.877 9477.514 - 9527.926: 91.9398% ( 19) 00:07:42.877 9527.926 - 9578.338: 92.0859% ( 23) 00:07:42.877 9578.338 - 9628.751: 92.2383% ( 24) 00:07:42.877 9628.751 - 9679.163: 92.3908% ( 24) 00:07:42.877 9679.163 - 9729.575: 92.5495% ( 25) 00:07:42.877 9729.575 - 9779.988: 92.7274% ( 28) 00:07:42.877 9779.988 - 9830.400: 92.9243% ( 31) 00:07:42.877 9830.400 - 9880.812: 93.1275% ( 32) 00:07:42.877 9880.812 - 9931.225: 93.2990% ( 27) 00:07:42.877 9931.225 - 9981.637: 93.4642% ( 26) 00:07:42.877 9981.637 - 10032.049: 93.6738% ( 33) 00:07:42.877 10032.049 - 10082.462: 93.8580% ( 29) 00:07:42.877 10082.462 - 10132.874: 94.0168% ( 25) 00:07:42.877 10132.874 - 10183.286: 94.2010% ( 29) 00:07:42.877 10183.286 - 
10233.698: 94.3852% ( 29) 00:07:42.877 10233.698 - 10284.111: 94.5757% ( 30) 00:07:42.877 10284.111 - 10334.523: 94.7027% ( 20) 00:07:42.877 10334.523 - 10384.935: 94.8806% ( 28) 00:07:42.877 10384.935 - 10435.348: 94.9759% ( 15) 00:07:42.877 10435.348 - 10485.760: 95.1283% ( 24) 00:07:42.877 10485.760 - 10536.172: 95.2934% ( 26) 00:07:42.877 10536.172 - 10586.585: 95.4332% ( 22) 00:07:42.877 10586.585 - 10636.997: 95.5666% ( 21) 00:07:42.877 10636.997 - 10687.409: 95.6999% ( 21) 00:07:42.877 10687.409 - 10737.822: 95.8016% ( 16) 00:07:42.877 10737.822 - 10788.234: 95.9477% ( 23) 00:07:42.877 10788.234 - 10838.646: 96.0556% ( 17) 00:07:42.877 10838.646 - 10889.058: 96.1636% ( 17) 00:07:42.877 10889.058 - 10939.471: 96.2398% ( 12) 00:07:42.877 10939.471 - 10989.883: 96.3415% ( 16) 00:07:42.877 10989.883 - 11040.295: 96.4113% ( 11) 00:07:42.877 11040.295 - 11090.708: 96.4939% ( 13) 00:07:42.877 11090.708 - 11141.120: 96.5701% ( 12) 00:07:42.877 11141.120 - 11191.532: 96.6463% ( 12) 00:07:42.877 11191.532 - 11241.945: 96.7543% ( 17) 00:07:42.877 11241.945 - 11292.357: 96.8115% ( 9) 00:07:42.877 11292.357 - 11342.769: 96.8686% ( 9) 00:07:42.877 11342.769 - 11393.182: 96.9195% ( 8) 00:07:42.877 11393.182 - 11443.594: 96.9703% ( 8) 00:07:42.877 11443.594 - 11494.006: 97.0211% ( 8) 00:07:42.877 11494.006 - 11544.418: 97.0783% ( 9) 00:07:42.877 11544.418 - 11594.831: 97.1608% ( 13) 00:07:42.877 11594.831 - 11645.243: 97.2053% ( 7) 00:07:42.877 11645.243 - 11695.655: 97.2688% ( 10) 00:07:42.877 11695.655 - 11746.068: 97.3260% ( 9) 00:07:42.877 11746.068 - 11796.480: 97.3895% ( 10) 00:07:42.877 11796.480 - 11846.892: 97.4593% ( 11) 00:07:42.877 11846.892 - 11897.305: 97.5419% ( 13) 00:07:42.877 11897.305 - 11947.717: 97.6308% ( 14) 00:07:42.877 11947.717 - 11998.129: 97.7134% ( 13) 00:07:42.877 11998.129 - 12048.542: 97.8150% ( 16) 00:07:42.877 12048.542 - 12098.954: 97.8913% ( 12) 00:07:42.877 12098.954 - 12149.366: 97.9484% ( 9) 00:07:42.877 12149.366 - 12199.778: 98.0183% ( 11) 00:07:42.877 12199.778 - 12250.191: 98.0691% ( 8) 00:07:42.877 12250.191 - 12300.603: 98.1072% ( 6) 00:07:42.877 12300.603 - 12351.015: 98.1517% ( 7) 00:07:42.877 12351.015 - 12401.428: 98.1898% ( 6) 00:07:42.877 12401.428 - 12451.840: 98.2470% ( 9) 00:07:42.877 12451.840 - 12502.252: 98.2851% ( 6) 00:07:42.877 12502.252 - 12552.665: 98.3105% ( 4) 00:07:42.877 12552.665 - 12603.077: 98.3359% ( 4) 00:07:42.877 12603.077 - 12653.489: 98.3549% ( 3) 00:07:42.877 12653.489 - 12703.902: 98.3803% ( 4) 00:07:42.877 12703.902 - 12754.314: 98.4057% ( 4) 00:07:42.877 12754.314 - 12804.726: 98.4248% ( 3) 00:07:42.877 12804.726 - 12855.138: 98.4502% ( 4) 00:07:42.877 12855.138 - 12905.551: 98.4693% ( 3) 00:07:42.877 12905.551 - 13006.375: 98.5264% ( 9) 00:07:42.877 13006.375 - 13107.200: 98.5645% ( 6) 00:07:42.877 13107.200 - 13208.025: 98.6344% ( 11) 00:07:42.877 13208.025 - 13308.849: 98.7297% ( 15) 00:07:42.877 13308.849 - 13409.674: 98.8122% ( 13) 00:07:42.877 13409.674 - 13510.498: 98.8821% ( 11) 00:07:42.877 13510.498 - 13611.323: 98.9393% ( 9) 00:07:42.877 13611.323 - 13712.148: 99.0028% ( 10) 00:07:42.877 13712.148 - 13812.972: 99.0473% ( 7) 00:07:42.877 13812.972 - 13913.797: 99.0854% ( 6) 00:07:42.877 13913.797 - 14014.622: 99.1235% ( 6) 00:07:42.877 14014.622 - 14115.446: 99.1679% ( 7) 00:07:42.877 14115.446 - 14216.271: 99.1870% ( 3) 00:07:42.877 26214.400 - 26416.049: 99.1933% ( 1) 00:07:42.877 26416.049 - 26617.698: 99.2442% ( 8) 00:07:42.877 26617.698 - 26819.348: 99.2950% ( 8) 00:07:42.877 26819.348 - 27020.997: 
99.3394% ( 7) 00:07:42.877 27020.997 - 27222.646: 99.3966% ( 9) 00:07:42.877 27222.646 - 27424.295: 99.4411% ( 7) 00:07:42.877 27424.295 - 27625.945: 99.4855% ( 7) 00:07:42.877 27625.945 - 27827.594: 99.5363% ( 8) 00:07:42.877 27827.594 - 28029.243: 99.5935% ( 9) 00:07:42.877 33473.772 - 33675.422: 99.6062% ( 2) 00:07:42.877 33675.422 - 33877.071: 99.6507% ( 7) 00:07:42.877 33877.071 - 34078.720: 99.7078% ( 9) 00:07:42.877 34078.720 - 34280.369: 99.7523% ( 7) 00:07:42.877 34280.369 - 34482.018: 99.8031% ( 8) 00:07:42.877 34482.018 - 34683.668: 99.8539% ( 8) 00:07:42.877 34683.668 - 34885.317: 99.9111% ( 9) 00:07:42.877 34885.317 - 35086.966: 99.9555% ( 7) 00:07:42.877 35086.966 - 35288.615: 100.0000% ( 7) 00:07:42.877 00:07:42.877 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:42.877 ============================================================================== 00:07:42.877 Range in us Cumulative IO count 00:07:42.877 5747.003 - 5772.209: 0.0064% ( 1) 00:07:42.877 5772.209 - 5797.415: 0.0381% ( 5) 00:07:42.877 5797.415 - 5822.622: 0.1080% ( 11) 00:07:42.877 5822.622 - 5847.828: 0.2350% ( 20) 00:07:42.877 5847.828 - 5873.034: 0.3112% ( 12) 00:07:42.877 5873.034 - 5898.240: 0.3938% ( 13) 00:07:42.877 5898.240 - 5923.446: 0.5018% ( 17) 00:07:42.877 5923.446 - 5948.652: 0.7114% ( 33) 00:07:42.877 5948.652 - 5973.858: 0.9718% ( 41) 00:07:42.877 5973.858 - 5999.065: 1.2005% ( 36) 00:07:42.877 5999.065 - 6024.271: 1.4291% ( 36) 00:07:42.877 6024.271 - 6049.477: 1.7785% ( 55) 00:07:42.877 6049.477 - 6074.683: 2.0325% ( 40) 00:07:42.877 6074.683 - 6099.889: 2.3247% ( 46) 00:07:42.877 6099.889 - 6125.095: 2.6296% ( 48) 00:07:42.877 6125.095 - 6150.302: 2.8582% ( 36) 00:07:42.877 6150.302 - 6175.508: 3.1504% ( 46) 00:07:42.877 6175.508 - 6200.714: 3.4362% ( 45) 00:07:42.877 6200.714 - 6225.920: 3.7538% ( 50) 00:07:42.877 6225.920 - 6251.126: 4.1286% ( 59) 00:07:42.877 6251.126 - 6276.332: 4.4906% ( 57) 00:07:42.877 6276.332 - 6301.538: 4.8272% ( 53) 00:07:42.877 6301.538 - 6326.745: 5.1893% ( 57) 00:07:42.877 6326.745 - 6351.951: 5.5577% ( 58) 00:07:42.877 6351.951 - 6377.157: 5.9261% ( 58) 00:07:42.877 6377.157 - 6402.363: 6.3008% ( 59) 00:07:42.877 6402.363 - 6427.569: 6.6756% ( 59) 00:07:42.877 6427.569 - 6452.775: 7.0694% ( 62) 00:07:42.877 6452.775 - 6503.188: 7.8443% ( 122) 00:07:42.877 6503.188 - 6553.600: 8.5938% ( 118) 00:07:42.877 6553.600 - 6604.012: 9.4258% ( 131) 00:07:42.877 6604.012 - 6654.425: 10.2261% ( 126) 00:07:42.877 6654.425 - 6704.837: 10.9439% ( 113) 00:07:42.877 6704.837 - 6755.249: 11.6362% ( 109) 00:07:42.877 6755.249 - 6805.662: 12.1951% ( 88) 00:07:42.877 6805.662 - 6856.074: 12.6778% ( 76) 00:07:42.877 6856.074 - 6906.486: 13.1161% ( 69) 00:07:42.877 6906.486 - 6956.898: 13.5861% ( 74) 00:07:42.877 6956.898 - 7007.311: 14.1514% ( 89) 00:07:42.877 7007.311 - 7057.723: 14.6786% ( 83) 00:07:42.877 7057.723 - 7108.135: 15.2757% ( 94) 00:07:42.877 7108.135 - 7158.548: 15.8727% ( 94) 00:07:42.877 7158.548 - 7208.960: 16.5142% ( 101) 00:07:42.877 7208.960 - 7259.372: 17.2637% ( 118) 00:07:42.877 7259.372 - 7309.785: 18.0894% ( 130) 00:07:42.877 7309.785 - 7360.197: 19.2518% ( 183) 00:07:42.877 7360.197 - 7410.609: 20.7762% ( 240) 00:07:42.877 7410.609 - 7461.022: 22.6753% ( 299) 00:07:42.877 7461.022 - 7511.434: 24.8539% ( 343) 00:07:42.877 7511.434 - 7561.846: 27.1977% ( 369) 00:07:42.877 7561.846 - 7612.258: 30.0686% ( 452) 00:07:42.877 7612.258 - 7662.671: 33.1999% ( 493) 00:07:42.877 7662.671 - 7713.083: 36.6171% ( 538) 00:07:42.877 7713.083 - 7763.495: 
40.1486% ( 556) 00:07:42.877 7763.495 - 7813.908: 43.9088% ( 592) 00:07:42.877 7813.908 - 7864.320: 47.5737% ( 577) 00:07:42.877 7864.320 - 7914.732: 51.3402% ( 593) 00:07:42.877 7914.732 - 7965.145: 55.2655% ( 618) 00:07:42.877 7965.145 - 8015.557: 59.1082% ( 605) 00:07:42.877 8015.557 - 8065.969: 62.9446% ( 604) 00:07:42.877 8065.969 - 8116.382: 66.7619% ( 601) 00:07:42.877 8116.382 - 8166.794: 70.4205% ( 576) 00:07:42.877 8166.794 - 8217.206: 73.6408% ( 507) 00:07:42.878 8217.206 - 8267.618: 76.6578% ( 475) 00:07:42.878 8267.618 - 8318.031: 79.2048% ( 401) 00:07:42.878 8318.031 - 8368.443: 81.4660% ( 356) 00:07:42.878 8368.443 - 8418.855: 83.3841% ( 302) 00:07:42.878 8418.855 - 8469.268: 85.0546% ( 263) 00:07:42.878 8469.268 - 8519.680: 86.4139% ( 214) 00:07:42.878 8519.680 - 8570.092: 87.4682% ( 166) 00:07:42.878 8570.092 - 8620.505: 88.3257% ( 135) 00:07:42.878 8620.505 - 8670.917: 88.9101% ( 92) 00:07:42.878 8670.917 - 8721.329: 89.4627% ( 87) 00:07:42.878 8721.329 - 8771.742: 89.9263% ( 73) 00:07:42.878 8771.742 - 8822.154: 90.2693% ( 54) 00:07:42.878 8822.154 - 8872.566: 90.5234% ( 40) 00:07:42.878 8872.566 - 8922.978: 90.6949% ( 27) 00:07:42.878 8922.978 - 8973.391: 90.8283% ( 21) 00:07:42.878 8973.391 - 9023.803: 90.9299% ( 16) 00:07:42.878 9023.803 - 9074.215: 91.0124% ( 13) 00:07:42.878 9074.215 - 9124.628: 91.1204% ( 17) 00:07:42.878 9124.628 - 9175.040: 91.1712% ( 8) 00:07:42.878 9175.040 - 9225.452: 91.2475% ( 12) 00:07:42.878 9225.452 - 9275.865: 91.3427% ( 15) 00:07:42.878 9275.865 - 9326.277: 91.4126% ( 11) 00:07:42.878 9326.277 - 9376.689: 91.5206% ( 17) 00:07:42.878 9376.689 - 9427.102: 91.6222% ( 16) 00:07:42.878 9427.102 - 9477.514: 91.7492% ( 20) 00:07:42.878 9477.514 - 9527.926: 91.8509% ( 16) 00:07:42.878 9527.926 - 9578.338: 91.9461% ( 15) 00:07:42.878 9578.338 - 9628.751: 92.0859% ( 22) 00:07:42.878 9628.751 - 9679.163: 92.2383% ( 24) 00:07:42.878 9679.163 - 9729.575: 92.3971% ( 25) 00:07:42.878 9729.575 - 9779.988: 92.5813% ( 29) 00:07:42.878 9779.988 - 9830.400: 92.7846% ( 32) 00:07:42.878 9830.400 - 9880.812: 92.9497% ( 26) 00:07:42.878 9880.812 - 9931.225: 93.1402% ( 30) 00:07:42.878 9931.225 - 9981.637: 93.3181% ( 28) 00:07:42.878 9981.637 - 10032.049: 93.5150% ( 31) 00:07:42.878 10032.049 - 10082.462: 93.6928% ( 28) 00:07:42.878 10082.462 - 10132.874: 93.8643% ( 27) 00:07:42.878 10132.874 - 10183.286: 94.0612% ( 31) 00:07:42.878 10183.286 - 10233.698: 94.2391% ( 28) 00:07:42.878 10233.698 - 10284.111: 94.3852% ( 23) 00:07:42.878 10284.111 - 10334.523: 94.5694% ( 29) 00:07:42.878 10334.523 - 10384.935: 94.7409% ( 27) 00:07:42.878 10384.935 - 10435.348: 94.9187% ( 28) 00:07:42.878 10435.348 - 10485.760: 95.0775% ( 25) 00:07:42.878 10485.760 - 10536.172: 95.1918% ( 18) 00:07:42.878 10536.172 - 10586.585: 95.3061% ( 18) 00:07:42.878 10586.585 - 10636.997: 95.4141% ( 17) 00:07:42.878 10636.997 - 10687.409: 95.5030% ( 14) 00:07:42.878 10687.409 - 10737.822: 95.5983% ( 15) 00:07:42.878 10737.822 - 10788.234: 95.6872% ( 14) 00:07:42.878 10788.234 - 10838.646: 95.7825% ( 15) 00:07:42.878 10838.646 - 10889.058: 95.8524% ( 11) 00:07:42.878 10889.058 - 10939.471: 95.9159% ( 10) 00:07:42.878 10939.471 - 10989.883: 95.9794% ( 10) 00:07:42.878 10989.883 - 11040.295: 96.0302% ( 8) 00:07:42.878 11040.295 - 11090.708: 96.0683% ( 6) 00:07:42.878 11090.708 - 11141.120: 96.1065% ( 6) 00:07:42.878 11141.120 - 11191.532: 96.1636% ( 9) 00:07:42.878 11191.532 - 11241.945: 96.2335% ( 11) 00:07:42.878 11241.945 - 11292.357: 96.3351% ( 16) 00:07:42.878 11292.357 - 11342.769: 96.4113% 
( 12) 00:07:42.878 11342.769 - 11393.182: 96.5193% ( 17) 00:07:42.878 11393.182 - 11443.594: 96.6082% ( 14) 00:07:42.878 11443.594 - 11494.006: 96.7289% ( 19) 00:07:42.878 11494.006 - 11544.418: 96.8369% ( 17) 00:07:42.878 11544.418 - 11594.831: 96.9385% ( 16) 00:07:42.878 11594.831 - 11645.243: 97.0401% ( 16) 00:07:42.878 11645.243 - 11695.655: 97.1354% ( 15) 00:07:42.878 11695.655 - 11746.068: 97.2243% ( 14) 00:07:42.878 11746.068 - 11796.480: 97.3133% ( 14) 00:07:42.878 11796.480 - 11846.892: 97.3831% ( 11) 00:07:42.878 11846.892 - 11897.305: 97.4593% ( 12) 00:07:42.878 11897.305 - 11947.717: 97.5483% ( 14) 00:07:42.878 11947.717 - 11998.129: 97.6435% ( 15) 00:07:42.878 11998.129 - 12048.542: 97.7388% ( 15) 00:07:42.878 12048.542 - 12098.954: 97.8087% ( 11) 00:07:42.878 12098.954 - 12149.366: 97.8659% ( 9) 00:07:42.878 12149.366 - 12199.778: 97.9103% ( 7) 00:07:42.878 12199.778 - 12250.191: 97.9421% ( 5) 00:07:42.878 12250.191 - 12300.603: 97.9802% ( 6) 00:07:42.878 12300.603 - 12351.015: 98.0183% ( 6) 00:07:42.878 12351.015 - 12401.428: 98.0564% ( 6) 00:07:42.878 12401.428 - 12451.840: 98.0945% ( 6) 00:07:42.878 12451.840 - 12502.252: 98.1517% ( 9) 00:07:42.878 12502.252 - 12552.665: 98.1961% ( 7) 00:07:42.878 12552.665 - 12603.077: 98.2533% ( 9) 00:07:42.878 12603.077 - 12653.489: 98.2978% ( 7) 00:07:42.878 12653.489 - 12703.902: 98.3486% ( 8) 00:07:42.878 12703.902 - 12754.314: 98.3930% ( 7) 00:07:42.878 12754.314 - 12804.726: 98.4184% ( 4) 00:07:42.878 12804.726 - 12855.138: 98.4502% ( 5) 00:07:42.878 12855.138 - 12905.551: 98.4756% ( 4) 00:07:42.878 12905.551 - 13006.375: 98.5328% ( 9) 00:07:42.878 13006.375 - 13107.200: 98.5772% ( 7) 00:07:42.878 13107.200 - 13208.025: 98.6026% ( 4) 00:07:42.878 13208.025 - 13308.849: 98.6344% ( 5) 00:07:42.878 13308.849 - 13409.674: 98.6789% ( 7) 00:07:42.878 13409.674 - 13510.498: 98.7551% ( 12) 00:07:42.878 13510.498 - 13611.323: 98.8249% ( 11) 00:07:42.878 13611.323 - 13712.148: 98.9075% ( 13) 00:07:42.878 13712.148 - 13812.972: 98.9837% ( 12) 00:07:42.878 13812.972 - 13913.797: 99.0346% ( 8) 00:07:42.878 13913.797 - 14014.622: 99.0790% ( 7) 00:07:42.878 14014.622 - 14115.446: 99.1298% ( 8) 00:07:42.878 14115.446 - 14216.271: 99.1806% ( 8) 00:07:42.878 14216.271 - 14317.095: 99.1870% ( 1) 00:07:42.878 25004.505 - 25105.329: 99.1933% ( 1) 00:07:42.878 25105.329 - 25206.154: 99.2188% ( 4) 00:07:42.878 25206.154 - 25306.978: 99.2442% ( 4) 00:07:42.878 25306.978 - 25407.803: 99.2696% ( 4) 00:07:42.878 25407.803 - 25508.628: 99.3013% ( 5) 00:07:42.878 25508.628 - 25609.452: 99.3267% ( 4) 00:07:42.878 25609.452 - 25710.277: 99.3521% ( 4) 00:07:42.878 25710.277 - 25811.102: 99.3775% ( 4) 00:07:42.878 25811.102 - 26012.751: 99.4347% ( 9) 00:07:42.878 26012.751 - 26214.400: 99.4855% ( 8) 00:07:42.878 26214.400 - 26416.049: 99.5363% ( 8) 00:07:42.878 26416.049 - 26617.698: 99.5935% ( 9) 00:07:42.878 32062.228 - 32263.877: 99.6062% ( 2) 00:07:42.878 32263.877 - 32465.526: 99.6570% ( 8) 00:07:42.878 32465.526 - 32667.175: 99.7015% ( 7) 00:07:42.878 32667.175 - 32868.825: 99.7586% ( 9) 00:07:42.878 32868.825 - 33070.474: 99.8095% ( 8) 00:07:42.878 33070.474 - 33272.123: 99.8603% ( 8) 00:07:42.878 33272.123 - 33473.772: 99.9174% ( 9) 00:07:42.878 33473.772 - 33675.422: 99.9746% ( 9) 00:07:42.878 33675.422 - 33877.071: 100.0000% ( 4) 00:07:42.878 00:07:42.878 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:42.878 ============================================================================== 00:07:42.878 Range in us Cumulative IO count 
00:07:42.878 5772.209 - 5797.415: 0.0254% ( 4) 00:07:42.878 5797.415 - 5822.622: 0.0826% ( 9) 00:07:42.878 5822.622 - 5847.828: 0.1969% ( 18) 00:07:42.878 5847.828 - 5873.034: 0.2985% ( 16) 00:07:42.878 5873.034 - 5898.240: 0.4192% ( 19) 00:07:42.878 5898.240 - 5923.446: 0.6034% ( 29) 00:07:42.878 5923.446 - 5948.652: 0.7876% ( 29) 00:07:42.878 5948.652 - 5973.858: 1.0036% ( 34) 00:07:42.878 5973.858 - 5999.065: 1.2894% ( 45) 00:07:42.878 5999.065 - 6024.271: 1.5943% ( 48) 00:07:42.878 6024.271 - 6049.477: 1.8801% ( 45) 00:07:42.878 6049.477 - 6074.683: 2.1659% ( 45) 00:07:42.878 6074.683 - 6099.889: 2.4454% ( 44) 00:07:42.878 6099.889 - 6125.095: 2.7058% ( 41) 00:07:42.878 6125.095 - 6150.302: 2.9726% ( 42) 00:07:42.878 6150.302 - 6175.508: 3.2584% ( 45) 00:07:42.878 6175.508 - 6200.714: 3.5442% ( 45) 00:07:42.878 6200.714 - 6225.920: 3.8237% ( 44) 00:07:42.878 6225.920 - 6251.126: 4.1476% ( 51) 00:07:42.878 6251.126 - 6276.332: 4.4588% ( 49) 00:07:42.878 6276.332 - 6301.538: 4.7891% ( 52) 00:07:42.878 6301.538 - 6326.745: 5.1067% ( 50) 00:07:42.878 6326.745 - 6351.951: 5.4370% ( 52) 00:07:42.878 6351.951 - 6377.157: 5.7990% ( 57) 00:07:42.878 6377.157 - 6402.363: 6.1547% ( 56) 00:07:42.878 6402.363 - 6427.569: 6.5168% ( 57) 00:07:42.878 6427.569 - 6452.775: 6.9042% ( 61) 00:07:42.878 6452.775 - 6503.188: 7.6918% ( 124) 00:07:42.878 6503.188 - 6553.600: 8.4159% ( 114) 00:07:42.878 6553.600 - 6604.012: 9.1972% ( 123) 00:07:42.878 6604.012 - 6654.425: 10.0165% ( 129) 00:07:42.878 6654.425 - 6704.837: 10.7215% ( 111) 00:07:42.878 6704.837 - 6755.249: 11.3313% ( 96) 00:07:42.878 6755.249 - 6805.662: 11.9728% ( 101) 00:07:42.878 6805.662 - 6856.074: 12.4873% ( 81) 00:07:42.878 6856.074 - 6906.486: 12.9383% ( 71) 00:07:42.878 6906.486 - 6956.898: 13.3765% ( 69) 00:07:42.878 6956.898 - 7007.311: 13.9482% ( 90) 00:07:42.878 7007.311 - 7057.723: 14.5135% ( 89) 00:07:42.878 7057.723 - 7108.135: 15.1105% ( 94) 00:07:42.879 7108.135 - 7158.548: 15.8283% ( 113) 00:07:42.879 7158.548 - 7208.960: 16.5904% ( 120) 00:07:42.879 7208.960 - 7259.372: 17.5051% ( 144) 00:07:42.879 7259.372 - 7309.785: 18.5023% ( 157) 00:07:42.879 7309.785 - 7360.197: 19.7409% ( 195) 00:07:42.879 7360.197 - 7410.609: 21.2271% ( 234) 00:07:42.879 7410.609 - 7461.022: 23.1326% ( 300) 00:07:42.879 7461.022 - 7511.434: 25.5145% ( 375) 00:07:42.879 7511.434 - 7561.846: 28.0488% ( 399) 00:07:42.879 7561.846 - 7612.258: 30.8753% ( 445) 00:07:42.879 7612.258 - 7662.671: 34.0384% ( 498) 00:07:42.879 7662.671 - 7713.083: 37.3412% ( 520) 00:07:42.879 7713.083 - 7763.495: 40.8600% ( 554) 00:07:42.879 7763.495 - 7813.908: 44.6837% ( 602) 00:07:42.879 7813.908 - 7864.320: 48.5328% ( 606) 00:07:42.879 7864.320 - 7914.732: 52.2675% ( 588) 00:07:42.879 7914.732 - 7965.145: 56.0531% ( 596) 00:07:42.879 7965.145 - 8015.557: 59.7307% ( 579) 00:07:42.879 8015.557 - 8065.969: 63.3638% ( 572) 00:07:42.879 8065.969 - 8116.382: 66.9398% ( 563) 00:07:42.879 8116.382 - 8166.794: 70.2934% ( 528) 00:07:42.879 8166.794 - 8217.206: 73.4883% ( 503) 00:07:42.879 8217.206 - 8267.618: 76.4228% ( 462) 00:07:42.879 8267.618 - 8318.031: 79.0460% ( 413) 00:07:42.879 8318.031 - 8368.443: 81.3516% ( 363) 00:07:42.879 8368.443 - 8418.855: 83.3397% ( 313) 00:07:42.879 8418.855 - 8469.268: 84.9721% ( 257) 00:07:42.879 8469.268 - 8519.680: 86.3567% ( 218) 00:07:42.879 8519.680 - 8570.092: 87.4365% ( 170) 00:07:42.879 8570.092 - 8620.505: 88.3511% ( 144) 00:07:42.879 8620.505 - 8670.917: 89.0371% ( 108) 00:07:42.879 8670.917 - 8721.329: 89.5579% ( 82) 00:07:42.879 8721.329 
- 8771.742: 90.0152% ( 72) 00:07:42.879 8771.742 - 8822.154: 90.3836% ( 58) 00:07:42.879 8822.154 - 8872.566: 90.6822% ( 47) 00:07:42.879 8872.566 - 8922.978: 90.9235% ( 38) 00:07:42.879 8922.978 - 8973.391: 91.1712% ( 39) 00:07:42.879 8973.391 - 9023.803: 91.3618% ( 30) 00:07:42.879 9023.803 - 9074.215: 91.4952% ( 21) 00:07:42.879 9074.215 - 9124.628: 91.6032% ( 17) 00:07:42.879 9124.628 - 9175.040: 91.7175% ( 18) 00:07:42.879 9175.040 - 9225.452: 91.7873% ( 11) 00:07:42.879 9225.452 - 9275.865: 91.8826% ( 15) 00:07:42.879 9275.865 - 9326.277: 91.9906% ( 17) 00:07:42.879 9326.277 - 9376.689: 92.0986% ( 17) 00:07:42.879 9376.689 - 9427.102: 92.2447% ( 23) 00:07:42.879 9427.102 - 9477.514: 92.3780% ( 21) 00:07:42.879 9477.514 - 9527.926: 92.5114% ( 21) 00:07:42.879 9527.926 - 9578.338: 92.6385% ( 20) 00:07:42.879 9578.338 - 9628.751: 92.7718% ( 21) 00:07:42.879 9628.751 - 9679.163: 92.8862% ( 18) 00:07:42.879 9679.163 - 9729.575: 93.0005% ( 18) 00:07:42.879 9729.575 - 9779.988: 93.1339% ( 21) 00:07:42.879 9779.988 - 9830.400: 93.2482% ( 18) 00:07:42.879 9830.400 - 9880.812: 93.3626% ( 18) 00:07:42.879 9880.812 - 9931.225: 93.5086% ( 23) 00:07:42.879 9931.225 - 9981.637: 93.6230% ( 18) 00:07:42.879 9981.637 - 10032.049: 93.7436% ( 19) 00:07:42.879 10032.049 - 10082.462: 93.8834% ( 22) 00:07:42.879 10082.462 - 10132.874: 94.0168% ( 21) 00:07:42.879 10132.874 - 10183.286: 94.1438% ( 20) 00:07:42.879 10183.286 - 10233.698: 94.2645% ( 19) 00:07:42.879 10233.698 - 10284.111: 94.3534% ( 14) 00:07:42.879 10284.111 - 10334.523: 94.4614% ( 17) 00:07:42.879 10334.523 - 10384.935: 94.5630% ( 16) 00:07:42.879 10384.935 - 10435.348: 94.6710% ( 17) 00:07:42.879 10435.348 - 10485.760: 94.7853% ( 18) 00:07:42.879 10485.760 - 10536.172: 94.8996% ( 18) 00:07:42.879 10536.172 - 10586.585: 95.0267% ( 20) 00:07:42.879 10586.585 - 10636.997: 95.1474% ( 19) 00:07:42.879 10636.997 - 10687.409: 95.2807% ( 21) 00:07:42.879 10687.409 - 10737.822: 95.3951% ( 18) 00:07:42.879 10737.822 - 10788.234: 95.5158% ( 19) 00:07:42.879 10788.234 - 10838.646: 95.6174% ( 16) 00:07:42.879 10838.646 - 10889.058: 95.7444% ( 20) 00:07:42.879 10889.058 - 10939.471: 95.8397% ( 15) 00:07:42.879 10939.471 - 10989.883: 95.9413% ( 16) 00:07:42.879 10989.883 - 11040.295: 96.0302% ( 14) 00:07:42.879 11040.295 - 11090.708: 96.1128% ( 13) 00:07:42.879 11090.708 - 11141.120: 96.1827% ( 11) 00:07:42.879 11141.120 - 11191.532: 96.2398% ( 9) 00:07:42.879 11191.532 - 11241.945: 96.2970% ( 9) 00:07:42.879 11241.945 - 11292.357: 96.3669% ( 11) 00:07:42.879 11292.357 - 11342.769: 96.4240% ( 9) 00:07:42.879 11342.769 - 11393.182: 96.4939% ( 11) 00:07:42.879 11393.182 - 11443.594: 96.5574% ( 10) 00:07:42.879 11443.594 - 11494.006: 96.6273% ( 11) 00:07:42.879 11494.006 - 11544.418: 96.6908% ( 10) 00:07:42.879 11544.418 - 11594.831: 96.7734% ( 13) 00:07:42.879 11594.831 - 11645.243: 96.8686% ( 15) 00:07:42.879 11645.243 - 11695.655: 96.9449% ( 12) 00:07:42.879 11695.655 - 11746.068: 97.0147% ( 11) 00:07:42.879 11746.068 - 11796.480: 97.0846% ( 11) 00:07:42.879 11796.480 - 11846.892: 97.1799% ( 15) 00:07:42.879 11846.892 - 11897.305: 97.2434% ( 10) 00:07:42.879 11897.305 - 11947.717: 97.3514% ( 17) 00:07:42.879 11947.717 - 11998.129: 97.4403% ( 14) 00:07:42.879 11998.129 - 12048.542: 97.5356% ( 15) 00:07:42.879 12048.542 - 12098.954: 97.6372% ( 16) 00:07:42.879 12098.954 - 12149.366: 97.7198% ( 13) 00:07:42.879 12149.366 - 12199.778: 97.8150% ( 15) 00:07:42.879 12199.778 - 12250.191: 97.9103% ( 15) 00:07:42.879 12250.191 - 12300.603: 97.9929% ( 13) 
00:07:42.879 12300.603 - 12351.015: 98.0628% ( 11) 00:07:42.879 12351.015 - 12401.428: 98.1072% ( 7) 00:07:42.879 12401.428 - 12451.840: 98.1453% ( 6) 00:07:42.879 12451.840 - 12502.252: 98.1898% ( 7) 00:07:42.879 12502.252 - 12552.665: 98.2406% ( 8) 00:07:42.879 12552.665 - 12603.077: 98.2851% ( 7) 00:07:42.879 12603.077 - 12653.489: 98.3232% ( 6) 00:07:42.879 12653.489 - 12703.902: 98.3740% ( 8) 00:07:42.879 12703.902 - 12754.314: 98.4057% ( 5) 00:07:42.879 12754.314 - 12804.726: 98.4311% ( 4) 00:07:42.879 12804.726 - 12855.138: 98.4566% ( 4) 00:07:42.879 12855.138 - 12905.551: 98.4820% ( 4) 00:07:42.879 12905.551 - 13006.375: 98.5328% ( 8) 00:07:42.879 13006.375 - 13107.200: 98.5836% ( 8) 00:07:42.879 13107.200 - 13208.025: 98.6408% ( 9) 00:07:42.879 13208.025 - 13308.849: 98.6852% ( 7) 00:07:42.879 13308.849 - 13409.674: 98.7170% ( 5) 00:07:42.879 13409.674 - 13510.498: 98.7424% ( 4) 00:07:42.879 13510.498 - 13611.323: 98.7678% ( 4) 00:07:42.879 13611.323 - 13712.148: 98.7805% ( 2) 00:07:42.879 13812.972 - 13913.797: 98.8122% ( 5) 00:07:42.879 13913.797 - 14014.622: 98.8567% ( 7) 00:07:42.879 14014.622 - 14115.446: 98.9075% ( 8) 00:07:42.879 14115.446 - 14216.271: 98.9393% ( 5) 00:07:42.879 14216.271 - 14317.095: 98.9647% ( 4) 00:07:42.879 14317.095 - 14417.920: 98.9964% ( 5) 00:07:42.879 14417.920 - 14518.745: 99.0218% ( 4) 00:07:42.879 14518.745 - 14619.569: 99.0409% ( 3) 00:07:42.879 14619.569 - 14720.394: 99.0663% ( 4) 00:07:42.879 14720.394 - 14821.218: 99.0917% ( 4) 00:07:42.879 14821.218 - 14922.043: 99.1235% ( 5) 00:07:42.879 14922.043 - 15022.868: 99.1743% ( 8) 00:07:42.879 15022.868 - 15123.692: 99.1870% ( 2) 00:07:42.879 24399.557 - 24500.382: 99.1997% ( 2) 00:07:42.879 24500.382 - 24601.206: 99.2124% ( 2) 00:07:42.879 24601.206 - 24702.031: 99.2251% ( 2) 00:07:42.879 24702.031 - 24802.855: 99.2442% ( 3) 00:07:42.879 24802.855 - 24903.680: 99.2632% ( 3) 00:07:42.879 24903.680 - 25004.505: 99.2950% ( 5) 00:07:42.879 25004.505 - 25105.329: 99.3204% ( 4) 00:07:42.879 25105.329 - 25206.154: 99.3458% ( 4) 00:07:42.879 25206.154 - 25306.978: 99.3712% ( 4) 00:07:42.879 25306.978 - 25407.803: 99.3966% ( 4) 00:07:42.879 25407.803 - 25508.628: 99.4284% ( 5) 00:07:42.879 25508.628 - 25609.452: 99.4538% ( 4) 00:07:42.879 25609.452 - 25710.277: 99.4792% ( 4) 00:07:42.879 25710.277 - 25811.102: 99.5046% ( 4) 00:07:42.879 25811.102 - 26012.751: 99.5554% ( 8) 00:07:42.879 26012.751 - 26214.400: 99.5935% ( 6) 00:07:42.879 31053.982 - 31255.631: 99.6316% ( 6) 00:07:42.879 31255.631 - 31457.280: 99.6824% ( 8) 00:07:42.879 31457.280 - 31658.929: 99.7332% ( 8) 00:07:42.879 31658.929 - 31860.578: 99.7840% ( 8) 00:07:42.879 31860.578 - 32062.228: 99.8349% ( 8) 00:07:42.879 32062.228 - 32263.877: 99.8857% ( 8) 00:07:42.879 32263.877 - 32465.526: 99.9428% ( 9) 00:07:42.879 32465.526 - 32667.175: 99.9936% ( 8) 00:07:42.879 32667.175 - 32868.825: 100.0000% ( 1) 00:07:42.879 00:07:42.879 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:42.879 ============================================================================== 00:07:42.879 Range in us Cumulative IO count 00:07:42.879 5747.003 - 5772.209: 0.0064% ( 1) 00:07:42.879 5772.209 - 5797.415: 0.0191% ( 2) 00:07:42.879 5797.415 - 5822.622: 0.0635% ( 7) 00:07:42.879 5822.622 - 5847.828: 0.1016% ( 6) 00:07:42.879 5847.828 - 5873.034: 0.1588% ( 9) 00:07:42.879 5873.034 - 5898.240: 0.3112% ( 24) 00:07:42.880 5898.240 - 5923.446: 0.5018% ( 30) 00:07:42.880 5923.446 - 5948.652: 0.7368% ( 37) 00:07:42.880 5948.652 - 5973.858: 1.0099% ( 43) 
00:07:42.880 [per-bucket latency data for the preceding histogram continues; cumulative IO count reaches 100.0000% at 31053.982 us]
00:07:42.881 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:42.881 ==============================================================================
00:07:42.881        Range in us     Cumulative    IO count
00:07:42.881 [per-bucket data omitted; cumulative IO count reaches 100.0000% at 29440.788 us]
00:07:42.882 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:42.882 ==============================================================================
00:07:42.882        Range in us     Cumulative    IO count
00:07:42.883 [per-bucket data omitted; cumulative IO count reaches 100.0000% at 22988.012 us]
00:07:42.883 12:40:08 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:43.825 Initializing NVMe Controllers
00:07:43.825 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:43.825 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:43.825 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:43.825 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:43.825 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:43.826 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:43.826 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:43.826 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:43.826 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:43.826 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:43.826 Initialization complete. Launching workers.
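The traced spdk_nvme_perf line above drives the write-phase measurement whose results follow. As a reading aid, here is an illustrative restatement with each option glossed; it is not part of the log, SPDK_DIR is an assumed variable introduced here, and the glosses are taken from spdk_nvme_perf's usage text, so they should be confirmed against the local build's --help output.

    # Illustrative restatement of the traced invocation (assumptions noted above).
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    args=(
      -q 128      # I/O queue depth per namespace
      -w write    # sequential write workload
      -o 12288    # I/O size in bytes (12 KiB)
      -t 1        # run time in seconds
      -L -L       # latency tracking; given twice, it also emits per-bucket histograms
      -i 0        # shared memory group ID, so the app can coexist with other SPDK apps
    )
    "$SPDK_DIR/build/bin/spdk_nvme_perf" "${args[@]}"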
00:07:43.826 ========================================================
00:07:43.826                                                                             Latency(us)
00:07:43.826 Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:43.826 PCIE (0000:00:10.0) NSID 1 from core 0:   17553.03     205.70    7301.06    5666.23   32612.83
00:07:43.826 PCIE (0000:00:11.0) NSID 1 from core 0:   17553.03     205.70    7288.91    5775.61   30645.82
00:07:43.826 PCIE (0000:00:13.0) NSID 1 from core 0:   17553.03     205.70    7276.54    5657.65   28824.76
00:07:43.826 PCIE (0000:00:12.0) NSID 1 from core 0:   17553.03     205.70    7264.67    5626.81   26960.10
00:07:43.826 PCIE (0000:00:12.0) NSID 2 from core 0:   17553.03     205.70    7252.99    5787.23   25113.38
00:07:43.826 PCIE (0000:00:12.0) NSID 3 from core 0:   17616.86     206.45    7215.00    5652.77   20057.83
00:07:43.826 ========================================================
00:07:43.826 Total                                  :  105382.00    1234.95    7266.50    5626.81   32612.83
00:07:43.826
00:07:43.826 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:43.826 =================================================================================
00:07:43.826   1.00000% :  5948.652us
00:07:43.826  10.00000% :  6276.332us
00:07:43.826  25.00000% :  6503.188us
00:07:43.826  50.00000% :  6805.662us
00:07:43.826  75.00000% :  7360.197us
00:07:43.826  90.00000% :  8469.268us
00:07:43.826  95.00000% : 10384.935us
00:07:43.826  98.00000% : 12199.778us
00:07:43.826  99.00000% : 13107.200us
00:07:43.826  99.50000% : 27020.997us
00:07:43.826  99.90000% : 32263.877us
00:07:43.826  99.99000% : 32667.175us
00:07:43.826  99.99900% : 32667.175us
00:07:43.826  99.99990% : 32667.175us
00:07:43.826  99.99999% : 32667.175us
00:07:43.826
00:07:43.826 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:43.826 =================================================================================
00:07:43.826   1.00000% :  6074.683us
00:07:43.826  10.00000% :  6326.745us
00:07:43.826  25.00000% :  6503.188us
00:07:43.826  50.00000% :  6755.249us
00:07:43.826  75.00000% :  7309.785us
00:07:43.826  90.00000% :  8519.680us
00:07:43.826  95.00000% : 10132.874us
00:07:43.826  98.00000% : 12098.954us
00:07:43.827  99.00000% : 13308.849us
00:07:43.827  99.50000% : 25206.154us
00:07:43.827  99.90000% : 30247.385us
00:07:43.827  99.99000% : 30650.683us
00:07:43.827  99.99900% : 30650.683us
00:07:43.827  99.99990% : 30650.683us
00:07:43.827  99.99999% : 30650.683us
00:07:43.827
00:07:43.827 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:43.827 =================================================================================
00:07:43.827   1.00000% :  6024.271us
00:07:43.827  10.00000% :  6326.745us
00:07:43.827  25.00000% :  6553.600us
00:07:43.827  50.00000% :  6755.249us
00:07:43.827  75.00000% :  7309.785us
00:07:43.827  90.00000% :  8418.855us
00:07:43.827  95.00000% : 10032.049us
00:07:43.827  98.00000% : 12552.665us
00:07:43.827  99.00000% : 13107.200us
00:07:43.827  99.50000% : 23794.609us
00:07:43.827  99.90000% : 28432.542us
00:07:43.827  99.99000% : 28835.840us
00:07:43.827  99.99900% : 28835.840us
00:07:43.827  99.99990% : 28835.840us
00:07:43.827  99.99999% : 28835.840us
00:07:43.827
00:07:43.827 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:43.827 =================================================================================
00:07:43.827   1.00000% :  6074.683us
00:07:43.827  10.00000% :  6351.951us
00:07:43.827  25.00000% :  6503.188us
00:07:43.827  50.00000% :  6755.249us
00:07:43.827  75.00000% :  7309.785us
00:07:43.827  90.00000% :  8469.268us
00:07:43.827  95.00000% : 10132.874us
00:07:43.827  98.00000% : 12502.252us
00:07:43.827  99.00000% : 13308.849us
00:07:43.827  99.50000% : 22080.591us
00:07:43.827  99.90000% : 26617.698us
00:07:43.827  99.99000% : 27020.997us
00:07:43.827  99.99900% : 27020.997us
00:07:43.827  99.99990% : 27020.997us
00:07:43.827  99.99999% : 27020.997us
00:07:43.827
00:07:43.827 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:43.827 =================================================================================
00:07:43.827   1.00000% :  6049.477us
00:07:43.827  10.00000% :  6326.745us
00:07:43.827  25.00000% :  6553.600us
00:07:43.827  50.00000% :  6755.249us
00:07:43.827  75.00000% :  7309.785us
00:07:43.827  90.00000% :  8469.268us
00:07:43.827  95.00000% :  9981.637us
00:07:43.827  98.00000% : 12451.840us
00:07:43.827  99.00000% : 13308.849us
00:07:43.827  99.50000% : 20164.923us
00:07:43.827  99.90000% : 24702.031us
00:07:43.827  99.99000% : 25105.329us
00:07:43.827  99.99900% : 25206.154us
00:07:43.827  99.99990% : 25206.154us
00:07:43.827  99.99999% : 25206.154us
00:07:43.828
00:07:43.828 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:43.828 =================================================================================
00:07:43.828   1.00000% :  6049.477us
00:07:43.828  10.00000% :  6351.951us
00:07:43.828  25.00000% :  6503.188us
00:07:43.828  50.00000% :  6755.249us
00:07:43.828  75.00000% :  7309.785us
00:07:43.828  90.00000% :  8418.855us
00:07:43.828  95.00000% : 10183.286us
00:07:43.828  98.00000% : 12300.603us
00:07:43.828  99.00000% : 13308.849us
00:07:43.828  99.50000% : 14518.745us
00:07:43.828  99.90000% : 19660.800us
00:07:43.828  99.99000% : 20064.098us
00:07:43.828  99.99900% : 20064.098us
00:07:43.828  99.99990% : 20064.098us
00:07:43.828  99.99999% : 20064.098us
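The MiB/s column in the table above is implied by the IOPS column and the 12288-byte I/O size from the command line: MiB/s = IOPS x 12288 / 1048576. A quick check (illustrative only, not part of the test output) reproduces both the per-namespace and Total figures:

    # Sanity check on the throughput column: MiB/s = IOPS * 12288 B / (1024*1024).
    # Prints 205.70 and 1234.95, matching the per-namespace rows and the Total row.
    awk 'BEGIN {
        printf "%.2f\n", 17553.03  * 12288 / 1048576
        printf "%.2f\n", 105382.00 * 12288 / 1048576
    }'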
00:07:43.828 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:43.828 ==============================================================================
00:07:43.828        Range in us     Cumulative    IO count
00:07:43.832 [per-bucket data omitted; cumulative IO count reaches 100.0000% at 32667.175 us]
00:07:43.832 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:43.832 ==============================================================================
00:07:43.832        Range in us     Cumulative    IO count
00:07:43.836 [per-bucket data omitted; cumulative IO count reaches 100.0000% at 30650.683 us]
00:07:43.836 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:43.836 ==============================================================================
00:07:43.836        Range in us     Cumulative    IO count
00:07:43.841 [per-bucket data omitted; cumulative IO count reaches 100.0000% at 28835.840 us]
00:07:43.841 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:43.841 ==============================================================================
00:07:43.841        Range in us     Cumulative    IO count
00:07:43.843 [per-bucket data omitted; cumulative IO count has reached 95.0227% at 10132.874 us where this excerpt ends]
16) 00:07:43.843 10132.874 - 10183.286: 95.1193% ( 17) 00:07:43.843 10183.286 - 10233.698: 95.1932% ( 13) 00:07:43.843 10233.698 - 10284.111: 95.2955% ( 18) 00:07:43.843 10284.111 - 10334.523: 95.5398% ( 43) 00:07:43.843 10334.523 - 10384.935: 95.6364% ( 17) 00:07:43.843 10384.935 - 10435.348: 95.7159% ( 14) 00:07:43.843 10435.348 - 10485.760: 95.8068% ( 16) 00:07:43.843 10485.760 - 10536.172: 95.8750% ( 12) 00:07:43.843 10536.172 - 10586.585: 95.9432% ( 12) 00:07:43.843 10586.585 - 10636.997: 95.9943% ( 9) 00:07:43.843 10636.997 - 10687.409: 96.0455% ( 9) 00:07:43.843 10687.409 - 10737.822: 96.0909% ( 8) 00:07:43.843 10737.822 - 10788.234: 96.1477% ( 10) 00:07:43.843 10788.234 - 10838.646: 96.1989% ( 9) 00:07:43.843 10838.646 - 10889.058: 96.2500% ( 9) 00:07:43.843 10889.058 - 10939.471: 96.2955% ( 8) 00:07:43.843 10939.471 - 10989.883: 96.3295% ( 6) 00:07:43.843 10989.883 - 11040.295: 96.3466% ( 3) 00:07:43.843 11040.295 - 11090.708: 96.3693% ( 4) 00:07:43.843 11090.708 - 11141.120: 96.3920% ( 4) 00:07:43.843 11141.120 - 11191.532: 96.4091% ( 3) 00:07:43.843 11191.532 - 11241.945: 96.4261% ( 3) 00:07:43.843 11241.945 - 11292.357: 96.4886% ( 11) 00:07:43.843 11292.357 - 11342.769: 96.5341% ( 8) 00:07:43.843 11342.769 - 11393.182: 96.5568% ( 4) 00:07:43.843 11393.182 - 11443.594: 96.5909% ( 6) 00:07:43.843 11443.594 - 11494.006: 96.6477% ( 10) 00:07:43.844 11494.006 - 11544.418: 96.6932% ( 8) 00:07:43.844 11544.418 - 11594.831: 96.8182% ( 22) 00:07:43.844 11594.831 - 11645.243: 96.9545% ( 24) 00:07:43.844 11645.243 - 11695.655: 97.0398% ( 15) 00:07:43.844 11695.655 - 11746.068: 97.1307% ( 16) 00:07:43.844 11746.068 - 11796.480: 97.1761% ( 8) 00:07:43.844 11796.480 - 11846.892: 97.2670% ( 16) 00:07:43.844 11846.892 - 11897.305: 97.3295% ( 11) 00:07:43.844 11897.305 - 11947.717: 97.4034% ( 13) 00:07:43.844 11947.717 - 11998.129: 97.4716% ( 12) 00:07:43.844 11998.129 - 12048.542: 97.5284% ( 10) 00:07:43.844 12048.542 - 12098.954: 97.5625% ( 6) 00:07:43.844 12098.954 - 12149.366: 97.5909% ( 5) 00:07:43.844 12149.366 - 12199.778: 97.6080% ( 3) 00:07:43.844 12199.778 - 12250.191: 97.6591% ( 9) 00:07:43.844 12250.191 - 12300.603: 97.6875% ( 5) 00:07:43.844 12300.603 - 12351.015: 97.7216% ( 6) 00:07:43.844 12351.015 - 12401.428: 97.8693% ( 26) 00:07:43.844 12401.428 - 12451.840: 97.9602% ( 16) 00:07:43.844 12451.840 - 12502.252: 98.0341% ( 13) 00:07:43.844 12502.252 - 12552.665: 98.1193% ( 15) 00:07:43.844 12552.665 - 12603.077: 98.1989% ( 14) 00:07:43.844 12603.077 - 12653.489: 98.2784% ( 14) 00:07:43.844 12653.489 - 12703.902: 98.3409% ( 11) 00:07:43.844 12703.902 - 12754.314: 98.4091% ( 12) 00:07:43.844 12754.314 - 12804.726: 98.4716% ( 11) 00:07:43.844 12804.726 - 12855.138: 98.6080% ( 24) 00:07:43.844 12855.138 - 12905.551: 98.6591% ( 9) 00:07:43.844 12905.551 - 13006.375: 98.7614% ( 18) 00:07:43.844 13006.375 - 13107.200: 98.8693% ( 19) 00:07:43.844 13107.200 - 13208.025: 98.9886% ( 21) 00:07:43.844 13208.025 - 13308.849: 99.1534% ( 29) 00:07:43.844 13308.849 - 13409.674: 99.2045% ( 9) 00:07:43.844 13409.674 - 13510.498: 99.2727% ( 12) 00:07:43.844 20971.520 - 21072.345: 99.2955% ( 4) 00:07:43.844 21072.345 - 21173.169: 99.3182% ( 4) 00:07:43.844 21173.169 - 21273.994: 99.3352% ( 3) 00:07:43.844 21273.994 - 21374.818: 99.3580% ( 4) 00:07:43.844 21374.818 - 21475.643: 99.3807% ( 4) 00:07:43.844 21475.643 - 21576.468: 99.4034% ( 4) 00:07:43.844 21576.468 - 21677.292: 99.4261% ( 4) 00:07:43.845 21677.292 - 21778.117: 99.4489% ( 4) 00:07:43.845 21778.117 - 21878.942: 99.4659% ( 3) 00:07:43.845 
21878.942 - 21979.766: 99.4886% ( 4) 00:07:43.845 21979.766 - 22080.591: 99.5114% ( 4) 00:07:43.845 22080.591 - 22181.415: 99.5341% ( 4) 00:07:43.845 22181.415 - 22282.240: 99.5568% ( 4) 00:07:43.845 22282.240 - 22383.065: 99.5795% ( 4) 00:07:43.845 22383.065 - 22483.889: 99.6023% ( 4) 00:07:43.845 22483.889 - 22584.714: 99.6250% ( 4) 00:07:43.845 22584.714 - 22685.538: 99.6364% ( 2) 00:07:43.845 25306.978 - 25407.803: 99.6591% ( 4) 00:07:43.845 25407.803 - 25508.628: 99.6761% ( 3) 00:07:43.845 25508.628 - 25609.452: 99.6989% ( 4) 00:07:43.845 25609.452 - 25710.277: 99.7273% ( 5) 00:07:43.845 25710.277 - 25811.102: 99.7500% ( 4) 00:07:43.845 25811.102 - 26012.751: 99.7955% ( 8) 00:07:43.845 26012.751 - 26214.400: 99.8409% ( 8) 00:07:43.845 26214.400 - 26416.049: 99.8864% ( 8) 00:07:43.845 26416.049 - 26617.698: 99.9261% ( 7) 00:07:43.845 26617.698 - 26819.348: 99.9659% ( 7) 00:07:43.845 26819.348 - 27020.997: 100.0000% ( 6) 00:07:43.845 00:07:43.845 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:43.845 ============================================================================== 00:07:43.845 Range in us Cumulative IO count 00:07:43.846 5772.209 - 5797.415: 0.0057% ( 1) 00:07:43.846 5822.622 - 5847.828: 0.0227% ( 3) 00:07:43.846 5847.828 - 5873.034: 0.0455% ( 4) 00:07:43.846 5873.034 - 5898.240: 0.0909% ( 8) 00:07:43.846 5898.240 - 5923.446: 0.1875% ( 17) 00:07:43.846 5923.446 - 5948.652: 0.2841% ( 17) 00:07:43.846 5948.652 - 5973.858: 0.4091% ( 22) 00:07:43.846 5973.858 - 5999.065: 0.5455% ( 24) 00:07:43.846 5999.065 - 6024.271: 0.8068% ( 46) 00:07:43.846 6024.271 - 6049.477: 1.0511% ( 43) 00:07:43.846 6049.477 - 6074.683: 1.5000% ( 79) 00:07:43.846 6074.683 - 6099.889: 1.7898% ( 51) 00:07:43.846 6099.889 - 6125.095: 2.2670% ( 84) 00:07:43.846 6125.095 - 6150.302: 2.7841% ( 91) 00:07:43.846 6150.302 - 6175.508: 3.3466% ( 99) 00:07:43.846 6175.508 - 6200.714: 4.2102% ( 152) 00:07:43.846 6200.714 - 6225.920: 4.9489% ( 130) 00:07:43.846 6225.920 - 6251.126: 6.1705% ( 215) 00:07:43.846 6251.126 - 6276.332: 7.2159% ( 184) 00:07:43.846 6276.332 - 6301.538: 8.6875% ( 259) 00:07:43.846 6301.538 - 6326.745: 10.0227% ( 235) 00:07:43.846 6326.745 - 6351.951: 11.8182% ( 316) 00:07:43.846 6351.951 - 6377.157: 13.6648% ( 325) 00:07:43.846 6377.157 - 6402.363: 15.5341% ( 329) 00:07:43.846 6402.363 - 6427.569: 17.9659% ( 428) 00:07:43.846 6427.569 - 6452.775: 20.2670% ( 405) 00:07:43.846 6452.775 - 6503.188: 24.7159% ( 783) 00:07:43.846 6503.188 - 6553.600: 29.4943% ( 841) 00:07:43.846 6553.600 - 6604.012: 34.8352% ( 940) 00:07:43.846 6604.012 - 6654.425: 40.3920% ( 978) 00:07:43.846 6654.425 - 6704.837: 45.5170% ( 902) 00:07:43.846 6704.837 - 6755.249: 50.4318% ( 865) 00:07:43.846 6755.249 - 6805.662: 54.5284% ( 721) 00:07:43.846 6805.662 - 6856.074: 58.5455% ( 707) 00:07:43.847 6856.074 - 6906.486: 62.2614% ( 654) 00:07:43.847 6906.486 - 6956.898: 65.2443% ( 525) 00:07:43.847 6956.898 - 7007.311: 67.3580% ( 372) 00:07:43.847 7007.311 - 7057.723: 69.3807% ( 356) 00:07:43.847 7057.723 - 7108.135: 71.1705% ( 315) 00:07:43.847 7108.135 - 7158.548: 72.6193% ( 255) 00:07:43.847 7158.548 - 7208.960: 73.6193% ( 176) 00:07:43.847 7208.960 - 7259.372: 74.6420% ( 180) 00:07:43.847 7259.372 - 7309.785: 75.5852% ( 166) 00:07:43.847 7309.785 - 7360.197: 76.5284% ( 166) 00:07:43.847 7360.197 - 7410.609: 77.3580% ( 146) 00:07:43.847 7410.609 - 7461.022: 78.4489% ( 192) 00:07:43.847 7461.022 - 7511.434: 79.4659% ( 179) 00:07:43.847 7511.434 - 7561.846: 80.4261% ( 169) 00:07:43.847 7561.846 - 
7612.258: 81.1534% ( 128) 00:07:43.847 7612.258 - 7662.671: 82.1193% ( 170) 00:07:43.847 7662.671 - 7713.083: 82.8977% ( 137) 00:07:43.847 7713.083 - 7763.495: 83.3807% ( 85) 00:07:43.847 7763.495 - 7813.908: 84.0455% ( 117) 00:07:43.847 7813.908 - 7864.320: 84.4886% ( 78) 00:07:43.847 7864.320 - 7914.732: 85.0568% ( 100) 00:07:43.847 7914.732 - 7965.145: 85.6364% ( 102) 00:07:43.847 7965.145 - 8015.557: 86.2330% ( 105) 00:07:43.847 8015.557 - 8065.969: 86.6364% ( 71) 00:07:43.847 8065.969 - 8116.382: 86.9830% ( 61) 00:07:43.847 8116.382 - 8166.794: 87.4034% ( 74) 00:07:43.847 8166.794 - 8217.206: 87.8352% ( 76) 00:07:43.847 8217.206 - 8267.618: 88.2784% ( 78) 00:07:43.847 8267.618 - 8318.031: 88.5966% ( 56) 00:07:43.847 8318.031 - 8368.443: 89.0227% ( 75) 00:07:43.847 8368.443 - 8418.855: 89.6136% ( 104) 00:07:43.847 8418.855 - 8469.268: 90.1705% ( 98) 00:07:43.847 8469.268 - 8519.680: 90.7500% ( 102) 00:07:43.847 8519.680 - 8570.092: 91.1705% ( 74) 00:07:43.847 8570.092 - 8620.505: 91.5966% ( 75) 00:07:43.847 8620.505 - 8670.917: 91.8636% ( 47) 00:07:43.847 8670.917 - 8721.329: 92.1023% ( 42) 00:07:43.847 8721.329 - 8771.742: 92.3864% ( 50) 00:07:43.847 8771.742 - 8822.154: 92.5511% ( 29) 00:07:43.847 8822.154 - 8872.566: 92.6875% ( 24) 00:07:43.847 8872.566 - 8922.978: 92.8182% ( 23) 00:07:43.847 8922.978 - 8973.391: 92.9261% ( 19) 00:07:43.847 8973.391 - 9023.803: 93.1364% ( 37) 00:07:43.847 9023.803 - 9074.215: 93.2273% ( 16) 00:07:43.847 9074.215 - 9124.628: 93.2955% ( 12) 00:07:43.847 9124.628 - 9175.040: 93.3864% ( 16) 00:07:43.847 9175.040 - 9225.452: 93.4716% ( 15) 00:07:43.847 9225.452 - 9275.865: 93.5625% ( 16) 00:07:43.847 9275.865 - 9326.277: 93.6477% ( 15) 00:07:43.847 9326.277 - 9376.689: 93.7273% ( 14) 00:07:43.847 9376.689 - 9427.102: 93.8523% ( 22) 00:07:43.847 9427.102 - 9477.514: 93.9148% ( 11) 00:07:43.847 9477.514 - 9527.926: 93.9886% ( 13) 00:07:43.847 9527.926 - 9578.338: 94.0739% ( 15) 00:07:43.847 9578.338 - 9628.751: 94.1705% ( 17) 00:07:43.847 9628.751 - 9679.163: 94.2500% ( 14) 00:07:43.847 9679.163 - 9729.575: 94.3352% ( 15) 00:07:43.847 9729.575 - 9779.988: 94.4545% ( 21) 00:07:43.847 9779.988 - 9830.400: 94.6136% ( 28) 00:07:43.847 9830.400 - 9880.812: 94.7727% ( 28) 00:07:43.847 9880.812 - 9931.225: 94.9432% ( 30) 00:07:43.847 9931.225 - 9981.637: 95.1080% ( 29) 00:07:43.847 9981.637 - 10032.049: 95.2386% ( 23) 00:07:43.847 10032.049 - 10082.462: 95.3125% ( 13) 00:07:43.847 10082.462 - 10132.874: 95.3693% ( 10) 00:07:43.847 10132.874 - 10183.286: 95.4261% ( 10) 00:07:43.847 10183.286 - 10233.698: 95.4830% ( 10) 00:07:43.847 10233.698 - 10284.111: 95.5341% ( 9) 00:07:43.847 10284.111 - 10334.523: 95.5852% ( 9) 00:07:43.847 10334.523 - 10384.935: 95.6250% ( 7) 00:07:43.847 10384.935 - 10435.348: 95.6420% ( 3) 00:07:43.847 10536.172 - 10586.585: 95.6761% ( 6) 00:07:43.847 10586.585 - 10636.997: 95.7159% ( 7) 00:07:43.847 10636.997 - 10687.409: 95.7614% ( 8) 00:07:43.847 10687.409 - 10737.822: 95.8636% ( 18) 00:07:43.847 10737.822 - 10788.234: 95.9148% ( 9) 00:07:43.847 10788.234 - 10838.646: 95.9659% ( 9) 00:07:43.847 10838.646 - 10889.058: 95.9943% ( 5) 00:07:43.847 10889.058 - 10939.471: 96.0170% ( 4) 00:07:43.847 10939.471 - 10989.883: 96.0568% ( 7) 00:07:43.847 10989.883 - 11040.295: 96.0795% ( 4) 00:07:43.847 11040.295 - 11090.708: 96.1420% ( 11) 00:07:43.847 11090.708 - 11141.120: 96.1989% ( 10) 00:07:43.847 11141.120 - 11191.532: 96.2784% ( 14) 00:07:43.847 11191.532 - 11241.945: 96.3636% ( 15) 00:07:43.847 11241.945 - 11292.357: 96.4830% ( 21) 
00:07:43.847 11292.357 - 11342.769: 96.5568% ( 13) 00:07:43.847 11342.769 - 11393.182: 96.6307% ( 13) 00:07:43.847 11393.182 - 11443.594: 96.6932% ( 11) 00:07:43.847 11443.594 - 11494.006: 96.7727% ( 14) 00:07:43.847 11494.006 - 11544.418: 96.8693% ( 17) 00:07:43.847 11544.418 - 11594.831: 96.9432% ( 13) 00:07:43.847 11594.831 - 11645.243: 97.0511% ( 19) 00:07:43.847 11645.243 - 11695.655: 97.1193% ( 12) 00:07:43.847 11695.655 - 11746.068: 97.1932% ( 13) 00:07:43.847 11746.068 - 11796.480: 97.2500% ( 10) 00:07:43.847 11796.480 - 11846.892: 97.3011% ( 9) 00:07:43.847 11846.892 - 11897.305: 97.3693% ( 12) 00:07:43.847 11897.305 - 11947.717: 97.4148% ( 8) 00:07:43.847 11947.717 - 11998.129: 97.4432% ( 5) 00:07:43.847 11998.129 - 12048.542: 97.4716% ( 5) 00:07:43.847 12048.542 - 12098.954: 97.5227% ( 9) 00:07:43.847 12098.954 - 12149.366: 97.5511% ( 5) 00:07:43.847 12149.366 - 12199.778: 97.5966% ( 8) 00:07:43.847 12199.778 - 12250.191: 97.6364% ( 7) 00:07:43.847 12250.191 - 12300.603: 97.7216% ( 15) 00:07:43.847 12300.603 - 12351.015: 97.9091% ( 33) 00:07:43.847 12351.015 - 12401.428: 97.9659% ( 10) 00:07:43.847 12401.428 - 12451.840: 98.0114% ( 8) 00:07:43.847 12451.840 - 12502.252: 98.0682% ( 10) 00:07:43.847 12502.252 - 12552.665: 98.1136% ( 8) 00:07:43.847 12552.665 - 12603.077: 98.1534% ( 7) 00:07:43.847 12603.077 - 12653.489: 98.2102% ( 10) 00:07:43.847 12653.489 - 12703.902: 98.2557% ( 8) 00:07:43.847 12703.902 - 12754.314: 98.3011% ( 8) 00:07:43.847 12754.314 - 12804.726: 98.3523% ( 9) 00:07:43.847 12804.726 - 12855.138: 98.4034% ( 9) 00:07:43.847 12855.138 - 12905.551: 98.4432% ( 7) 00:07:43.847 12905.551 - 13006.375: 98.5795% ( 24) 00:07:43.847 13006.375 - 13107.200: 98.7557% ( 31) 00:07:43.847 13107.200 - 13208.025: 98.9318% ( 31) 00:07:43.847 13208.025 - 13308.849: 99.1420% ( 37) 00:07:43.847 13308.849 - 13409.674: 99.2216% ( 14) 00:07:43.847 13409.674 - 13510.498: 99.2670% ( 8) 00:07:43.847 13510.498 - 13611.323: 99.2727% ( 1) 00:07:43.847 19156.677 - 19257.502: 99.2955% ( 4) 00:07:43.847 19257.502 - 19358.326: 99.3182% ( 4) 00:07:43.847 19358.326 - 19459.151: 99.3409% ( 4) 00:07:43.847 19459.151 - 19559.975: 99.3636% ( 4) 00:07:43.848 19559.975 - 19660.800: 99.3864% ( 4) 00:07:43.848 19660.800 - 19761.625: 99.4091% ( 4) 00:07:43.848 19761.625 - 19862.449: 99.4318% ( 4) 00:07:43.848 19862.449 - 19963.274: 99.4545% ( 4) 00:07:43.848 19963.274 - 20064.098: 99.4773% ( 4) 00:07:43.848 20064.098 - 20164.923: 99.5000% ( 4) 00:07:43.848 20164.923 - 20265.748: 99.5284% ( 5) 00:07:43.848 20265.748 - 20366.572: 99.5511% ( 4) 00:07:43.848 20366.572 - 20467.397: 99.5682% ( 3) 00:07:43.848 20467.397 - 20568.222: 99.5909% ( 4) 00:07:43.848 20568.222 - 20669.046: 99.6136% ( 4) 00:07:43.848 20669.046 - 20769.871: 99.6364% ( 4) 00:07:43.848 23492.135 - 23592.960: 99.6591% ( 4) 00:07:43.848 23592.960 - 23693.785: 99.6818% ( 4) 00:07:43.848 23693.785 - 23794.609: 99.7045% ( 4) 00:07:43.848 23794.609 - 23895.434: 99.7273% ( 4) 00:07:43.848 23895.434 - 23996.258: 99.7443% ( 3) 00:07:43.848 23996.258 - 24097.083: 99.7670% ( 4) 00:07:43.848 24097.083 - 24197.908: 99.7898% ( 4) 00:07:43.848 24197.908 - 24298.732: 99.8125% ( 4) 00:07:43.848 24298.732 - 24399.557: 99.8352% ( 4) 00:07:43.848 24399.557 - 24500.382: 99.8580% ( 4) 00:07:43.848 24500.382 - 24601.206: 99.8807% ( 4) 00:07:43.848 24601.206 - 24702.031: 99.9034% ( 4) 00:07:43.848 24702.031 - 24802.855: 99.9261% ( 4) 00:07:43.848 24802.855 - 24903.680: 99.9489% ( 4) 00:07:43.848 24903.680 - 25004.505: 99.9716% ( 4) 00:07:43.848 25004.505 - 
25105.329: 99.9943% ( 4) 00:07:43.848 25105.329 - 25206.154: 100.0000% ( 1) 00:07:43.848 00:07:43.848 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:43.848 ============================================================================== 00:07:43.848 Range in us Cumulative IO count 00:07:43.848 5646.178 - 5671.385: 0.0057% ( 1) 00:07:43.848 5696.591 - 5721.797: 0.0113% ( 1) 00:07:43.848 5772.209 - 5797.415: 0.0170% ( 1) 00:07:43.848 5797.415 - 5822.622: 0.0283% ( 2) 00:07:43.848 5822.622 - 5847.828: 0.0453% ( 3) 00:07:43.848 5847.828 - 5873.034: 0.0849% ( 7) 00:07:43.848 5873.034 - 5898.240: 0.1245% ( 7) 00:07:43.848 5898.240 - 5923.446: 0.1981% ( 13) 00:07:43.848 5923.446 - 5948.652: 0.2944% ( 17) 00:07:43.848 5948.652 - 5973.858: 0.3906% ( 17) 00:07:43.848 5973.858 - 5999.065: 0.5208% ( 23) 00:07:43.848 5999.065 - 6024.271: 0.8492% ( 58) 00:07:43.848 6024.271 - 6049.477: 1.2341% ( 68) 00:07:43.848 6049.477 - 6074.683: 1.6361% ( 71) 00:07:43.848 6074.683 - 6099.889: 1.8625% ( 40) 00:07:43.848 6099.889 - 6125.095: 2.2362% ( 66) 00:07:43.848 6125.095 - 6150.302: 2.7570% ( 92) 00:07:43.848 6150.302 - 6175.508: 3.1533% ( 70) 00:07:43.848 6175.508 - 6200.714: 3.7534% ( 106) 00:07:43.848 6200.714 - 6225.920: 4.7668% ( 179) 00:07:43.848 6225.920 - 6251.126: 5.9386% ( 207) 00:07:43.848 6251.126 - 6276.332: 7.0086% ( 189) 00:07:43.848 6276.332 - 6301.538: 8.3447% ( 236) 00:07:43.848 6301.538 - 6326.745: 9.8336% ( 263) 00:07:43.848 6326.745 - 6351.951: 12.1886% ( 416) 00:07:43.848 6351.951 - 6377.157: 13.7851% ( 282) 00:07:43.848 6377.157 - 6402.363: 15.6929% ( 337) 00:07:43.848 6402.363 - 6427.569: 17.5385% ( 326) 00:07:43.848 6427.569 - 6452.775: 19.8483% ( 408) 00:07:43.848 6452.775 - 6503.188: 25.0510% ( 919) 00:07:43.848 6503.188 - 6553.600: 29.7554% ( 831) 00:07:43.848 6553.600 - 6604.012: 35.1619% ( 955) 00:07:43.848 6604.012 - 6654.425: 40.7439% ( 986) 00:07:43.848 6654.425 - 6704.837: 46.0824% ( 943) 00:07:43.848 6704.837 - 6755.249: 50.3170% ( 748) 00:07:43.848 6755.249 - 6805.662: 54.1667% ( 680) 00:07:43.848 6805.662 - 6856.074: 57.6427% ( 614) 00:07:43.848 6856.074 - 6906.486: 61.5659% ( 693) 00:07:43.848 6906.486 - 6956.898: 64.1418% ( 455) 00:07:43.848 6956.898 - 7007.311: 66.3723% ( 394) 00:07:43.848 7007.311 - 7057.723: 68.6481% ( 402) 00:07:43.848 7057.723 - 7108.135: 70.5389% ( 334) 00:07:43.848 7108.135 - 7158.548: 72.3336% ( 317) 00:07:43.848 7158.548 - 7208.960: 73.5224% ( 210) 00:07:43.848 7208.960 - 7259.372: 74.8358% ( 232) 00:07:43.848 7259.372 - 7309.785: 75.9794% ( 202) 00:07:43.848 7309.785 - 7360.197: 76.6814% ( 124) 00:07:43.848 7360.197 - 7410.609: 77.3041% ( 110) 00:07:43.848 7410.609 - 7461.022: 78.2099% ( 160) 00:07:43.848 7461.022 - 7511.434: 79.2289% ( 180) 00:07:43.848 7511.434 - 7561.846: 80.0951% ( 153) 00:07:43.848 7561.846 - 7612.258: 80.8028% ( 125) 00:07:43.848 7612.258 - 7662.671: 81.5048% ( 124) 00:07:43.848 7662.671 - 7713.083: 82.5861% ( 191) 00:07:43.848 7713.083 - 7763.495: 83.2994% ( 126) 00:07:43.848 7763.495 - 7813.908: 83.9561% ( 116) 00:07:43.848 7813.908 - 7864.320: 84.6920% ( 130) 00:07:43.848 7864.320 - 7914.732: 85.3261% ( 112) 00:07:43.848 7914.732 - 7965.145: 85.9432% ( 109) 00:07:43.848 7965.145 - 8015.557: 86.3281% ( 68) 00:07:43.848 8015.557 - 8065.969: 86.8093% ( 85) 00:07:43.848 8065.969 - 8116.382: 87.1377% ( 58) 00:07:43.848 8116.382 - 8166.794: 87.6019% ( 82) 00:07:43.848 8166.794 - 8217.206: 88.0378% ( 77) 00:07:43.848 8217.206 - 8267.618: 88.6549% ( 109) 00:07:43.848 8267.618 - 8318.031: 89.1191% ( 82) 
00:07:43.848 8318.031 - 8368.443: 89.5890% ( 83) 00:07:43.848 8368.443 - 8418.855: 90.0702% ( 85) 00:07:43.848 8418.855 - 8469.268: 90.5854% ( 91) 00:07:43.848 8469.268 - 8519.680: 90.9477% ( 64) 00:07:43.848 8519.680 - 8570.092: 91.2421% ( 52) 00:07:43.848 8570.092 - 8620.505: 91.6270% ( 68) 00:07:43.848 8620.505 - 8670.917: 91.9724% ( 61) 00:07:43.848 8670.917 - 8721.329: 92.2045% ( 41) 00:07:43.848 8721.329 - 8771.742: 92.3970% ( 34) 00:07:43.848 8771.742 - 8822.154: 92.5045% ( 19) 00:07:43.848 8822.154 - 8872.566: 92.7027% ( 35) 00:07:43.848 8872.566 - 8922.978: 92.8612% ( 28) 00:07:43.848 8922.978 - 8973.391: 92.9574% ( 17) 00:07:43.848 8973.391 - 9023.803: 93.1556% ( 35) 00:07:43.848 9023.803 - 9074.215: 93.2405% ( 15) 00:07:43.848 9074.215 - 9124.628: 93.3367% ( 17) 00:07:43.848 9124.628 - 9175.040: 93.4443% ( 19) 00:07:43.848 9175.040 - 9225.452: 93.5405% ( 17) 00:07:43.848 9225.452 - 9275.865: 93.6085% ( 12) 00:07:43.848 9275.865 - 9326.277: 93.6990% ( 16) 00:07:43.848 9326.277 - 9376.689: 93.7840% ( 15) 00:07:43.848 9376.689 - 9427.102: 93.8519% ( 12) 00:07:43.848 9427.102 - 9477.514: 93.9764% ( 22) 00:07:43.848 9477.514 - 9527.926: 94.0727% ( 17) 00:07:43.848 9527.926 - 9578.338: 94.1576% ( 15) 00:07:43.848 9578.338 - 9628.751: 94.2369% ( 14) 00:07:43.848 9628.751 - 9679.163: 94.3048% ( 12) 00:07:43.848 9679.163 - 9729.575: 94.3784% ( 13) 00:07:43.848 9729.575 - 9779.988: 94.4463% ( 12) 00:07:43.848 9779.988 - 9830.400: 94.5369% ( 16) 00:07:43.848 9830.400 - 9880.812: 94.6275% ( 16) 00:07:43.848 9880.812 - 9931.225: 94.7011% ( 13) 00:07:43.848 9931.225 - 9981.637: 94.8030% ( 18) 00:07:43.848 9981.637 - 10032.049: 94.8879% ( 15) 00:07:43.848 10032.049 - 10082.462: 94.9445% ( 10) 00:07:43.848 10082.462 - 10132.874: 94.9898% ( 8) 00:07:43.848 10132.874 - 10183.286: 95.0181% ( 5) 00:07:43.848 10183.286 - 10233.698: 95.0351% ( 3) 00:07:43.848 10233.698 - 10284.111: 95.0577% ( 4) 00:07:43.848 10284.111 - 10334.523: 95.1030% ( 8) 00:07:43.848 10334.523 - 10384.935: 95.1313% ( 5) 00:07:43.848 10384.935 - 10435.348: 95.1823% ( 9) 00:07:43.848 10435.348 - 10485.760: 95.2502% ( 12) 00:07:43.848 10485.760 - 10536.172: 95.3635% ( 20) 00:07:43.848 10536.172 - 10586.585: 95.4937% ( 23) 00:07:43.848 10586.585 - 10636.997: 95.5559% ( 11) 00:07:43.848 10636.997 - 10687.409: 95.7314% ( 31) 00:07:43.848 10687.409 - 10737.822: 95.7937% ( 11) 00:07:43.848 10737.822 - 10788.234: 95.8277% ( 6) 00:07:43.848 10788.234 - 10838.646: 95.8730% ( 8) 00:07:43.848 10838.646 - 10889.058: 95.9069% ( 6) 00:07:43.848 10889.058 - 10939.471: 95.9409% ( 6) 00:07:43.848 10939.471 - 10989.883: 95.9862% ( 8) 00:07:43.848 10989.883 - 11040.295: 96.0258% ( 7) 00:07:43.848 11040.295 - 11090.708: 96.0824% ( 10) 00:07:43.848 11090.708 - 11141.120: 96.1390% ( 10) 00:07:43.848 11141.120 - 11191.532: 96.1900% ( 9) 00:07:43.848 11191.532 - 11241.945: 96.3995% ( 37) 00:07:43.848 11241.945 - 11292.357: 96.4447% ( 8) 00:07:43.848 11292.357 - 11342.769: 96.5240% ( 14) 00:07:43.848 11342.769 - 11393.182: 96.5976% ( 13) 00:07:43.848 11393.182 - 11443.594: 96.6712% ( 13) 00:07:43.848 11443.594 - 11494.006: 96.7957% ( 22) 00:07:43.848 11494.006 - 11544.418: 96.9486% ( 27) 00:07:43.848 11544.418 - 11594.831: 97.0618% ( 20) 00:07:43.848 11594.831 - 11645.243: 97.1241% ( 11) 00:07:43.848 11645.243 - 11695.655: 97.1920% ( 12) 00:07:43.848 11695.655 - 11746.068: 97.2656% ( 13) 00:07:43.848 11746.068 - 11796.480: 97.3619% ( 17) 00:07:43.848 11796.480 - 11846.892: 97.4638% ( 18) 00:07:43.848 11846.892 - 11897.305: 97.5430% ( 14) 00:07:43.848 
11897.305 - 11947.717: 97.6449% ( 18) 00:07:43.848 11947.717 - 11998.129: 97.6902% ( 8) 00:07:43.848 11998.129 - 12048.542: 97.7525% ( 11) 00:07:43.848 12048.542 - 12098.954: 97.8261% ( 13) 00:07:43.848 12098.954 - 12149.366: 97.8714% ( 8) 00:07:43.848 12149.366 - 12199.778: 97.9223% ( 9) 00:07:43.848 12199.778 - 12250.191: 97.9903% ( 12) 00:07:43.848 12250.191 - 12300.603: 98.0639% ( 13) 00:07:43.848 12300.603 - 12351.015: 98.1375% ( 13) 00:07:43.848 12351.015 - 12401.428: 98.1884% ( 9) 00:07:43.848 12401.428 - 12451.840: 98.2337% ( 8) 00:07:43.848 12451.840 - 12502.252: 98.2903% ( 10) 00:07:43.848 12502.252 - 12552.665: 98.3356% ( 8) 00:07:43.848 12552.665 - 12603.077: 98.3696% ( 6) 00:07:43.848 12603.077 - 12653.489: 98.4205% ( 9) 00:07:43.848 12653.489 - 12703.902: 98.4488% ( 5) 00:07:43.848 12703.902 - 12754.314: 98.4885% ( 7) 00:07:43.848 12754.314 - 12804.726: 98.5111% ( 4) 00:07:43.848 12804.726 - 12855.138: 98.5394% ( 5) 00:07:43.848 12855.138 - 12905.551: 98.5847% ( 8) 00:07:43.848 12905.551 - 13006.375: 98.6866% ( 18) 00:07:43.848 13006.375 - 13107.200: 98.7885% ( 18) 00:07:43.848 13107.200 - 13208.025: 98.8678% ( 14) 00:07:43.848 13208.025 - 13308.849: 99.0772% ( 37) 00:07:43.848 13308.849 - 13409.674: 99.1225% ( 8) 00:07:43.848 13409.674 - 13510.498: 99.1791% ( 10) 00:07:43.848 13510.498 - 13611.323: 99.2301% ( 9) 00:07:43.849 13611.323 - 13712.148: 99.2697% ( 7) 00:07:43.849 13712.148 - 13812.972: 99.3150% ( 8) 00:07:43.849 13812.972 - 13913.797: 99.3546% ( 7) 00:07:43.849 13913.797 - 14014.622: 99.3942% ( 7) 00:07:43.849 14014.622 - 14115.446: 99.4169% ( 4) 00:07:43.849 14115.446 - 14216.271: 99.4395% ( 4) 00:07:43.849 14216.271 - 14317.095: 99.4622% ( 4) 00:07:43.849 14317.095 - 14417.920: 99.4905% ( 5) 00:07:43.849 14417.920 - 14518.745: 99.5131% ( 4) 00:07:43.849 14518.745 - 14619.569: 99.5301% ( 3) 00:07:43.849 14619.569 - 14720.394: 99.5584% ( 5) 00:07:43.849 14720.394 - 14821.218: 99.5811% ( 4) 00:07:43.849 14821.218 - 14922.043: 99.6037% ( 4) 00:07:43.849 14922.043 - 15022.868: 99.6320% ( 5) 00:07:43.849 15022.868 - 15123.692: 99.6377% ( 1) 00:07:43.849 18450.905 - 18551.729: 99.6603% ( 4) 00:07:43.849 18551.729 - 18652.554: 99.6830% ( 4) 00:07:43.849 18652.554 - 18753.378: 99.7056% ( 4) 00:07:43.849 18753.378 - 18854.203: 99.7283% ( 4) 00:07:43.849 18854.203 - 18955.028: 99.7509% ( 4) 00:07:43.849 18955.028 - 19055.852: 99.7736% ( 4) 00:07:43.849 19055.852 - 19156.677: 99.7962% ( 4) 00:07:43.849 19156.677 - 19257.502: 99.8188% ( 4) 00:07:43.849 19257.502 - 19358.326: 99.8358% ( 3) 00:07:43.849 19358.326 - 19459.151: 99.8528% ( 3) 00:07:43.849 19459.151 - 19559.975: 99.8811% ( 5) 00:07:43.849 19559.975 - 19660.800: 99.9038% ( 4) 00:07:43.849 19660.800 - 19761.625: 99.9264% ( 4) 00:07:43.849 19761.625 - 19862.449: 99.9490% ( 4) 00:07:43.849 19862.449 - 19963.274: 99.9717% ( 4) 00:07:43.849 19963.274 - 20064.098: 100.0000% ( 5) 00:07:43.849 00:07:43.849 12:40:09 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:43.849 00:07:43.849 real 0m2.499s 00:07:43.849 user 0m2.206s 00:07:43.849 sys 0m0.190s 00:07:43.849 12:40:09 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.849 12:40:09 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:43.849 ************************************ 00:07:43.849 END TEST nvme_perf 00:07:43.849 ************************************ 00:07:43.849 12:40:09 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:43.849 12:40:09 
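Note on reading the tables above: each histogram row is a latency bucket ("Range in us") followed by the cumulative percentage of IOs completed at or below that bucket and the raw count landing in the bucket. The sketch below is a minimal, self-contained illustration of how such a cumulative table can be produced; it is not SPDK's implementation, and the linear bucket edges and synthetic sample data are assumptions made for the example.

# Illustrative sketch: build a cumulative latency table shaped like the ones
# above. Not SPDK code; bucket edges and the sample data are fabricated.
import random

def cumulative_histogram(latencies_us, num_buckets=32):
    lo, hi = min(latencies_us), max(latencies_us)
    width = (hi - lo) / num_buckets or 1.0
    counts = [0] * num_buckets
    for lat in latencies_us:
        idx = min(int((lat - lo) / width), num_buckets - 1)
        counts[idx] += 1
    total, running, rows = len(latencies_us), 0, []
    for i, c in enumerate(counts):
        if c == 0:
            continue  # the tables above likewise skip empty buckets
        running += c
        rows.append((lo + i * width, lo + (i + 1) * width,
                     100.0 * running / total, c))
    return rows

# Synthetic latencies roughly shaped like the log: bulk near 6-8 ms, long tail.
sample = ([random.gauss(6800, 600) for _ in range(10000)]
          + [random.uniform(9000, 28000) for _ in range(300)])
for lo_b, hi_b, pct, count in cumulative_histogram(sample):
    print(f"{lo_b:10.3f} - {hi_b:10.3f}: {pct:8.4f}% ({count:6d})")

Running it prints rows in the same "lo - hi: pct% ( count)" shape as the log, which makes it easy to eyeball where the knee of each distribution sits.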
00:07:43.849 12:40:09 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:43.849 12:40:09 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:43.849 12:40:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:43.849 12:40:09 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:43.849 ************************************
00:07:43.849 START TEST nvme_hello_world
00:07:43.849 ************************************
00:07:43.849 12:40:09 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:44.107 Initializing NVMe Controllers
00:07:44.107 Attached to 0000:00:10.0
00:07:44.107   Namespace ID: 1 size: 6GB
00:07:44.107 Attached to 0000:00:11.0
00:07:44.107   Namespace ID: 1 size: 5GB
00:07:44.107 Attached to 0000:00:13.0
00:07:44.107   Namespace ID: 1 size: 1GB
00:07:44.107 Attached to 0000:00:12.0
00:07:44.107   Namespace ID: 1 size: 4GB
00:07:44.107   Namespace ID: 2 size: 4GB
00:07:44.107   Namespace ID: 3 size: 4GB
00:07:44.107 Initialization complete.
00:07:44.107 INFO: using host memory buffer for IO
00:07:44.107 Hello world!
00:07:44.107 INFO: using host memory buffer for IO
00:07:44.107 Hello world!
00:07:44.107 INFO: using host memory buffer for IO
00:07:44.107 Hello world!
00:07:44.107 INFO: using host memory buffer for IO
00:07:44.107 Hello world!
00:07:44.107 INFO: using host memory buffer for IO
00:07:44.107 Hello world!
00:07:44.107 INFO: using host memory buffer for IO
00:07:44.107 Hello world!
00:07:44.107
00:07:44.107 real	0m0.225s
00:07:44.107 user	0m0.077s
00:07:44.107 sys	0m0.101s
00:07:44.107 12:40:09 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:44.107 12:40:09 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:44.107 ************************************
00:07:44.107 END TEST nvme_hello_world
00:07:44.107 ************************************
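Note: hello_world attaches to every controller and, for each namespace, writes a "Hello world!" payload, reads it back, and prints it, which is why one "INFO: using host memory buffer for IO" / "Hello world!" pair appears per namespace (six in total here). A rough sketch of that write-then-read-back round trip follows, using an ordinary file as a stand-in for an NVMe namespace; this is illustrative only, the real example drives SPDK's NVMe API, and SECTOR_SIZE is an assumption for the sketch.

# Rough sketch of the hello_world write/read-back flow, with a temp file
# standing in for an NVMe namespace (illustrative; not SPDK code).
import os, tempfile

SECTOR_SIZE = 4096  # assumed sector size for the sketch

def hello_roundtrip(path):
    payload = b"Hello world!\0".ljust(SECTOR_SIZE, b"\0")  # one full "sector"
    with open(path, "wb") as dev:
        dev.write(payload)                 # analogous to the NVMe write command
    with open(path, "rb") as dev:
        readback = dev.read(SECTOR_SIZE)   # analogous to the NVMe read command
    print(readback.rstrip(b"\0").decode()) # -> "Hello world!"

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    path = tmp.name
hello_roundtrip(path)
os.remove(path)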
00:07:44.107 12:40:09 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:44.107 12:40:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:44.107 12:40:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:44.107 12:40:09 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:44.107 ************************************
00:07:44.107 START TEST nvme_sgl
00:07:44.107 ************************************
00:07:44.107 12:40:09 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:44.364 [ negative-path results, one line each in the log: build_io_request_0/1/3/8/9/11 return 'Invalid IO length parameter' on 0000:00:10.0 and 0000:00:11.0; build_io_request_0 through build_io_request_11 all return 'Invalid IO length parameter' on 0000:00:13.0 and 0000:00:12.0 ]
00:07:44.365 NVMe Readv/Writev Request test
00:07:44.365 Attached to 0000:00:10.0
00:07:44.365 Attached to 0000:00:11.0
00:07:44.365 Attached to 0000:00:13.0
00:07:44.365 Attached to 0000:00:12.0
00:07:44.365 0000:00:10.0: build_io_request_2 test passed
00:07:44.365 0000:00:10.0: build_io_request_4 test passed
00:07:44.365 0000:00:10.0: build_io_request_5 test passed
00:07:44.365 0000:00:10.0: build_io_request_6 test passed
00:07:44.365 0000:00:10.0: build_io_request_7 test passed
00:07:44.365 0000:00:10.0: build_io_request_10 test passed
00:07:44.365 0000:00:11.0: build_io_request_2 test passed
00:07:44.365 0000:00:11.0: build_io_request_4 test passed
00:07:44.365 0000:00:11.0: build_io_request_5 test passed
00:07:44.365 0000:00:11.0: build_io_request_6 test passed
00:07:44.365 0000:00:11.0: build_io_request_7 test passed
00:07:44.365 0000:00:11.0: build_io_request_10 test passed
00:07:44.365 Cleaning up...
00:07:44.365
00:07:44.365 real	0m0.289s
00:07:44.365 user	0m0.145s
00:07:44.365 sys	0m0.094s
00:07:44.365 12:40:09 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:44.365 12:40:09 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:07:44.623 ************************************
00:07:44.623 END TEST nvme_sgl
00:07:44.623 ************************************
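Note: the sgl test builds NVMe read/write requests whose payloads are described by scatter-gather lists; requests with lengths a controller cannot accept must be rejected with "Invalid IO length parameter", and the rest must build and pass. Which requests are rejected differs per controller, as the per-device lines above show. Below is a simplified sketch of the validity-check idea; the block-multiple rule and every number in it are stand-ins, since real constraints come from the controller's reported SGL support.

# Simplified sketch: validate that a scatter-gather list describes a payload
# whose total length is a whole number of blocks. Real controllers impose
# additional SGL rules; these checks and values are illustrative stand-ins.
BLOCK_SIZE = 512

def build_io_request(sgl):
    total = sum(length for _base, length in sgl)
    if total == 0 or total % BLOCK_SIZE:
        raise ValueError("Invalid IO length parameter")
    return total // BLOCK_SIZE  # number of blocks the request covers

requests = {
    "build_io_request_2": [(0x1000, 512), (0x2000, 1536)],  # 4 blocks: valid
    "build_io_request_0": [(0x1000, 100)],                  # not block-aligned
}
for name, sgl in requests.items():
    try:
        print(f"{name}: {build_io_request(sgl)} blocks, test passed")
    except ValueError as err:
        print(f"{name}: {err}")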
00:07:44.623 12:40:09 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:44.623 12:40:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:44.623 12:40:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:44.623 12:40:09 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:44.623 ************************************
00:07:44.623 START TEST nvme_e2edp
00:07:44.623 ************************************
00:07:44.623 12:40:09 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:44.623 NVMe Write/Read with End-to-End data protection test
00:07:44.623 Attached to 0000:00:10.0
00:07:44.623 Attached to 0000:00:11.0
00:07:44.623 Attached to 0000:00:13.0
00:07:44.623 Attached to 0000:00:12.0
00:07:44.623 Cleaning up...
00:07:44.623
00:07:44.623 real	0m0.213s
00:07:44.623 user	0m0.065s
00:07:44.623 sys	0m0.101s
00:07:44.623 12:40:10 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:44.623 12:40:10 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:44.881 ************************************
00:07:44.881 END TEST nvme_e2edp
00:07:44.881 ************************************
00:07:44.881 12:40:10 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:44.881 12:40:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:44.881 12:40:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:44.881 12:40:10 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:44.881 ************************************
00:07:44.881 START TEST nvme_reserve
00:07:44.881 ************************************
00:07:44.881 12:40:10 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:44.881 =====================================================
00:07:44.881 NVMe Controller at PCI bus 0, device 16, function 0
00:07:44.881 =====================================================
00:07:44.881 Reservations:                Not Supported
00:07:44.881 =====================================================
00:07:44.881 NVMe Controller at PCI bus 0, device 17, function 0
00:07:44.881 =====================================================
00:07:44.881 Reservations:                Not Supported
00:07:44.881 =====================================================
00:07:44.881 NVMe Controller at PCI bus 0, device 19, function 0
00:07:44.881 =====================================================
00:07:44.881 Reservations:                Not Supported
00:07:44.881 =====================================================
00:07:44.881 NVMe Controller at PCI bus 0, device 18, function 0
00:07:44.881 =====================================================
00:07:44.881 Reservations:                Not Supported
00:07:44.881 Reservation test passed
00:07:44.881 ************************************
00:07:44.881 END TEST nvme_reserve
00:07:44.881 ************************************
00:07:44.881
00:07:44.881 real	0m0.226s
00:07:44.881 user	0m0.083s
00:07:44.881 sys	0m0.087s
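Note: "Reservations: Not Supported" is the expected result above; the emulated QEMU controllers do not advertise NVMe reservation support, so the test reports the capability on each controller and passes without exercising register/acquire/release. Controllers advertise this via the ONCS field of the Identify Controller data (bit 5 is Reservations in the NVMe base spec, to the best of my reading). The sketch below gates a test on that bit; the ONCS values are fabricated and no real Identify structure is parsed.

# Hedged sketch: skip reservation tests when a controller does not advertise
# support. ONCS bit 5 ("Reservations") follows the NVMe base spec; the
# per-device ONCS values below are made-up stand-ins.
ONCS_RESERVATIONS = 1 << 5

def reservations_supported(oncs: int) -> bool:
    return bool(oncs & ONCS_RESERVATIONS)

for pci_addr, oncs in [("0000:00:10.0", 0x14), ("0000:00:11.0", 0x34)]:
    if reservations_supported(oncs):
        print(f"{pci_addr}: Reservations: Supported, running register/acquire/release")
    else:
        print(f"{pci_addr}: Reservations: Not Supported, skipping")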
00:07:44.881 12:40:10 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:45.139 12:40:10 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:07:45.139 12:40:10 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:45.139 12:40:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:45.139 12:40:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:45.139 12:40:10 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:45.139 ************************************
00:07:45.139 START TEST nvme_err_injection
00:07:45.139 ************************************
00:07:45.139 12:40:10 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:45.139 NVMe Error Injection test
00:07:45.139 Attached to 0000:00:10.0
00:07:45.139 Attached to 0000:00:11.0
00:07:45.139 Attached to 0000:00:13.0
00:07:45.139 Attached to 0000:00:12.0
00:07:45.139 0000:00:13.0: get features failed as expected
00:07:45.139 0000:00:12.0: get features failed as expected
00:07:45.139 0000:00:10.0: get features failed as expected
00:07:45.139 0000:00:11.0: get features failed as expected
00:07:45.139 0000:00:10.0: get features successfully as expected
00:07:45.139 0000:00:11.0: get features successfully as expected
00:07:45.139 0000:00:13.0: get features successfully as expected
00:07:45.139 0000:00:12.0: get features successfully as expected
00:07:45.140 0000:00:10.0: read failed as expected
00:07:45.140 0000:00:11.0: read failed as expected
00:07:45.140 0000:00:13.0: read failed as expected
00:07:45.140 0000:00:12.0: read failed as expected
00:07:45.140 0000:00:10.0: read successfully as expected
00:07:45.140 0000:00:13.0: read successfully as expected
00:07:45.140 0000:00:11.0: read successfully as expected
00:07:45.140 0000:00:12.0: read successfully as expected
00:07:45.140 Cleaning up...
00:07:45.398 ************************************
00:07:45.398 END TEST nvme_err_injection
00:07:45.398 ************************************
00:07:45.398
00:07:45.398 real	0m0.225s
00:07:45.398 user	0m0.086s
00:07:45.398 sys	0m0.095s
00:07:45.398 12:40:10 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:45.398 12:40:10 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
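Note: the pattern in the err_injection output above is arm, verify, disarm, verify: an error is injected for an admin command (Get Features), the command is confirmed to fail ("failed as expected"), the injection is removed, the command is confirmed to succeed, and the same is repeated for an IO read on each controller. A generic sketch of that arm/verify/disarm wrapper follows; it is not SPDK's error-injection API, and the command names simply mirror the log for readability.

# Generic fault-injection sketch (not SPDK's error-injection API).
class FaultyDevice:
    def __init__(self):
        self.injected = set()  # command names currently forced to fail

    def inject(self, cmd):
        self.injected.add(cmd)

    def clear(self, cmd):
        self.injected.discard(cmd)

    def execute(self, cmd):
        if cmd in self.injected:
            raise IOError(f"{cmd} failed (injected)")
        return f"{cmd} ok"

def expect_failure(dev, cmd):
    try:
        dev.execute(cmd)
    except IOError:
        print(f"{cmd} failed as expected")
    else:
        raise AssertionError(f"{cmd} unexpectedly succeeded")

dev = FaultyDevice()
for cmd in ("get features", "read"):
    dev.inject(cmd)
    expect_failure(dev, cmd)   # armed: the command must fail
    dev.clear(cmd)
    print(dev.execute(cmd), "- successfully as expected")  # disarmed: must pass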
00:07:46.772 submit (in ns) avg, min, max = 12196.1, 9942.3, 258639.2 00:07:46.772 complete (in ns) avg, min, max = 8311.7, 7313.1, 531759.2 00:07:46.772 00:07:46.772 Submit histogram 00:07:46.772 ================ 00:07:46.772 Range in us Cumulative Count 00:07:46.772 9.895 - 9.945: 0.0059% ( 1) 00:07:46.772 10.043 - 10.092: 0.0117% ( 1) 00:07:46.772 10.338 - 10.388: 0.0176% ( 1) 00:07:46.772 10.683 - 10.732: 0.0234% ( 1) 00:07:46.772 10.732 - 10.782: 0.0352% ( 2) 00:07:46.772 10.782 - 10.831: 0.1231% ( 15) 00:07:46.772 10.831 - 10.880: 0.6798% ( 95) 00:07:46.772 10.880 - 10.929: 2.8655% ( 373) 00:07:46.772 10.929 - 10.978: 8.5497% ( 970) 00:07:46.772 10.978 - 11.028: 18.2596% ( 1657) 00:07:46.772 11.028 - 11.077: 30.9347% ( 2163) 00:07:46.772 11.077 - 11.126: 43.1351% ( 2082) 00:07:46.772 11.126 - 11.175: 52.7395% ( 1639) 00:07:46.772 11.175 - 11.225: 59.2265% ( 1107) 00:07:46.772 11.225 - 11.274: 62.9827% ( 641) 00:07:46.772 11.274 - 11.323: 65.0337% ( 350) 00:07:46.772 11.323 - 11.372: 66.3698% ( 228) 00:07:46.772 11.372 - 11.422: 67.2195% ( 145) 00:07:46.772 11.422 - 11.471: 67.8816% ( 113) 00:07:46.772 11.471 - 11.520: 68.3797% ( 85) 00:07:46.772 11.520 - 11.569: 68.8720% ( 84) 00:07:46.772 11.569 - 11.618: 69.4287% ( 95) 00:07:46.772 11.618 - 11.668: 69.9912% ( 96) 00:07:46.772 11.668 - 11.717: 70.5655% ( 98) 00:07:46.772 11.717 - 11.766: 71.1339% ( 97) 00:07:46.772 11.766 - 11.815: 71.7551% ( 106) 00:07:46.772 11.815 - 11.865: 72.3469% ( 101) 00:07:46.772 11.865 - 11.914: 73.0501% ( 120) 00:07:46.772 11.914 - 11.963: 73.8764% ( 141) 00:07:46.772 11.963 - 12.012: 74.6733% ( 136) 00:07:46.772 12.012 - 12.062: 75.4234% ( 128) 00:07:46.772 12.062 - 12.111: 76.0328% ( 104) 00:07:46.772 12.111 - 12.160: 76.6833% ( 111) 00:07:46.772 12.160 - 12.209: 77.3630% ( 116) 00:07:46.772 12.209 - 12.258: 77.8963% ( 91) 00:07:46.772 12.258 - 12.308: 78.3358% ( 75) 00:07:46.772 12.308 - 12.357: 78.7343% ( 68) 00:07:46.772 12.357 - 12.406: 78.9862% ( 43) 00:07:46.772 12.406 - 12.455: 79.2734% ( 49) 00:07:46.772 12.455 - 12.505: 79.4316% ( 27) 00:07:46.772 12.505 - 12.554: 79.5546% ( 21) 00:07:46.772 12.554 - 12.603: 79.6601% ( 18) 00:07:46.772 12.603 - 12.702: 79.7890% ( 22) 00:07:46.772 12.702 - 12.800: 79.9238% ( 23) 00:07:46.772 12.800 - 12.898: 79.9883% ( 11) 00:07:46.772 12.898 - 12.997: 80.0996% ( 19) 00:07:46.772 12.997 - 13.095: 80.2578% ( 27) 00:07:46.772 13.095 - 13.194: 80.3868% ( 22) 00:07:46.772 13.194 - 13.292: 80.4747% ( 15) 00:07:46.772 13.292 - 13.391: 80.5743% ( 17) 00:07:46.772 13.391 - 13.489: 80.6446% ( 12) 00:07:46.772 13.489 - 13.588: 80.6856% ( 7) 00:07:46.772 13.588 - 13.686: 80.7618% ( 13) 00:07:46.772 13.686 - 13.785: 80.8204% ( 10) 00:07:46.772 13.785 - 13.883: 80.9669% ( 25) 00:07:46.772 13.883 - 13.982: 81.0314% ( 11) 00:07:46.772 13.982 - 14.080: 81.1134% ( 14) 00:07:46.772 14.080 - 14.178: 81.2306% ( 20) 00:07:46.772 14.178 - 14.277: 81.3419% ( 19) 00:07:46.772 14.277 - 14.375: 81.4943% ( 26) 00:07:46.772 14.375 - 14.474: 81.7756% ( 48) 00:07:46.772 14.474 - 14.572: 82.2268% ( 77) 00:07:46.772 14.572 - 14.671: 83.4574% ( 210) 00:07:46.772 14.671 - 14.769: 85.6431% ( 373) 00:07:46.772 14.769 - 14.868: 88.0809% ( 416) 00:07:46.772 14.868 - 14.966: 90.3897% ( 394) 00:07:46.772 14.966 - 15.065: 91.8664% ( 252) 00:07:46.773 15.065 - 15.163: 92.6165% ( 128) 00:07:46.773 15.163 - 15.262: 93.1204% ( 86) 00:07:46.773 15.262 - 15.360: 93.5013% ( 65) 00:07:46.773 15.360 - 15.458: 94.0170% ( 88) 00:07:46.773 15.458 - 15.557: 94.7260% ( 121) 00:07:46.773 15.557 - 15.655: 95.6812% ( 
163) 00:07:46.773 15.655 - 15.754: 96.4196% ( 126) 00:07:46.773 15.754 - 15.852: 96.9001% ( 82) 00:07:46.773 15.852 - 15.951: 97.2048% ( 52) 00:07:46.773 15.951 - 16.049: 97.3747% ( 29) 00:07:46.773 16.049 - 16.148: 97.5095% ( 23) 00:07:46.773 16.148 - 16.246: 97.5505% ( 7) 00:07:46.773 16.246 - 16.345: 97.6091% ( 10) 00:07:46.773 16.345 - 16.443: 97.6502% ( 7) 00:07:46.773 16.443 - 16.542: 97.6912% ( 7) 00:07:46.773 16.542 - 16.640: 97.7146% ( 4) 00:07:46.773 16.640 - 16.738: 97.7322% ( 3) 00:07:46.773 16.738 - 16.837: 97.7615% ( 5) 00:07:46.773 16.837 - 16.935: 97.7849% ( 4) 00:07:46.773 16.935 - 17.034: 97.7908% ( 1) 00:07:46.773 17.034 - 17.132: 97.8201% ( 5) 00:07:46.773 17.132 - 17.231: 97.8670% ( 8) 00:07:46.773 17.231 - 17.329: 97.9373% ( 12) 00:07:46.773 17.329 - 17.428: 97.9607% ( 4) 00:07:46.773 17.428 - 17.526: 98.0018% ( 7) 00:07:46.773 17.526 - 17.625: 98.0545% ( 9) 00:07:46.773 17.625 - 17.723: 98.1072% ( 9) 00:07:46.773 17.723 - 17.822: 98.1717% ( 11) 00:07:46.773 17.822 - 17.920: 98.2127% ( 7) 00:07:46.773 17.920 - 18.018: 98.2479% ( 6) 00:07:46.773 18.018 - 18.117: 98.2713% ( 4) 00:07:46.773 18.117 - 18.215: 98.3182% ( 8) 00:07:46.773 18.215 - 18.314: 98.3416% ( 4) 00:07:46.773 18.314 - 18.412: 98.4120% ( 12) 00:07:46.773 18.412 - 18.511: 98.4471% ( 6) 00:07:46.773 18.511 - 18.609: 98.4823% ( 6) 00:07:46.773 18.609 - 18.708: 98.4999% ( 3) 00:07:46.773 18.708 - 18.806: 98.5292% ( 5) 00:07:46.773 18.806 - 18.905: 98.5643% ( 6) 00:07:46.773 18.905 - 19.003: 98.5702% ( 1) 00:07:46.773 19.003 - 19.102: 98.5995% ( 5) 00:07:46.773 19.102 - 19.200: 98.6229% ( 4) 00:07:46.773 19.200 - 19.298: 98.6405% ( 3) 00:07:46.773 19.298 - 19.397: 98.6757% ( 6) 00:07:46.773 19.397 - 19.495: 98.6991% ( 4) 00:07:46.773 19.594 - 19.692: 98.7167% ( 3) 00:07:46.773 19.692 - 19.791: 98.7343% ( 3) 00:07:46.773 19.791 - 19.889: 98.7577% ( 4) 00:07:46.773 19.889 - 19.988: 98.7636% ( 1) 00:07:46.773 19.988 - 20.086: 98.7929% ( 5) 00:07:46.773 20.086 - 20.185: 98.8046% ( 2) 00:07:46.773 20.185 - 20.283: 98.8163% ( 2) 00:07:46.773 20.283 - 20.382: 98.8280% ( 2) 00:07:46.773 20.382 - 20.480: 98.8339% ( 1) 00:07:46.773 20.480 - 20.578: 98.8456% ( 2) 00:07:46.773 20.578 - 20.677: 98.8632% ( 3) 00:07:46.773 20.677 - 20.775: 98.8690% ( 1) 00:07:46.773 20.775 - 20.874: 98.8925% ( 4) 00:07:46.773 20.874 - 20.972: 98.8983% ( 1) 00:07:46.773 20.972 - 21.071: 98.9042% ( 1) 00:07:46.773 21.071 - 21.169: 98.9159% ( 2) 00:07:46.773 21.268 - 21.366: 98.9218% ( 1) 00:07:46.773 21.465 - 21.563: 98.9335% ( 2) 00:07:46.773 21.563 - 21.662: 98.9452% ( 2) 00:07:46.773 21.662 - 21.760: 98.9511% ( 1) 00:07:46.773 21.760 - 21.858: 98.9686% ( 3) 00:07:46.773 21.858 - 21.957: 98.9745% ( 1) 00:07:46.773 21.957 - 22.055: 98.9862% ( 2) 00:07:46.773 22.055 - 22.154: 98.9979% ( 2) 00:07:46.773 22.154 - 22.252: 99.0155% ( 3) 00:07:46.773 22.252 - 22.351: 99.0272% ( 2) 00:07:46.773 22.351 - 22.449: 99.0448% ( 3) 00:07:46.773 22.449 - 22.548: 99.0565% ( 2) 00:07:46.773 22.548 - 22.646: 99.0683% ( 2) 00:07:46.773 22.646 - 22.745: 99.0741% ( 1) 00:07:46.773 22.745 - 22.843: 99.0917% ( 3) 00:07:46.773 22.843 - 22.942: 99.1093% ( 3) 00:07:46.773 22.942 - 23.040: 99.1444% ( 6) 00:07:46.773 23.040 - 23.138: 99.1562% ( 2) 00:07:46.773 23.335 - 23.434: 99.1679% ( 2) 00:07:46.773 23.434 - 23.532: 99.1737% ( 1) 00:07:46.773 23.532 - 23.631: 99.1913% ( 3) 00:07:46.773 23.631 - 23.729: 99.1972% ( 1) 00:07:46.773 23.729 - 23.828: 99.2323% ( 6) 00:07:46.773 23.828 - 23.926: 99.2558% ( 4) 00:07:46.773 23.926 - 24.025: 99.2734% ( 3) 00:07:46.773 24.025 
- 24.123: 99.2909% ( 3) 00:07:46.773 24.123 - 24.222: 99.3085% ( 3) 00:07:46.773 24.222 - 24.320: 99.3144% ( 1) 00:07:46.773 24.320 - 24.418: 99.3261% ( 2) 00:07:46.773 24.418 - 24.517: 99.3437% ( 3) 00:07:46.773 24.615 - 24.714: 99.3495% ( 1) 00:07:46.773 24.911 - 25.009: 99.3554% ( 1) 00:07:46.773 25.009 - 25.108: 99.3730% ( 3) 00:07:46.773 25.206 - 25.403: 99.3788% ( 1) 00:07:46.773 25.403 - 25.600: 99.3847% ( 1) 00:07:46.773 25.600 - 25.797: 99.3906% ( 1) 00:07:46.773 25.797 - 25.994: 99.3964% ( 1) 00:07:46.773 27.175 - 27.372: 99.4023% ( 1) 00:07:46.773 27.372 - 27.569: 99.4081% ( 1) 00:07:46.773 27.569 - 27.766: 99.4140% ( 1) 00:07:46.773 27.963 - 28.160: 99.4199% ( 1) 00:07:46.773 28.160 - 28.357: 99.4316% ( 2) 00:07:46.773 28.751 - 28.948: 99.4433% ( 2) 00:07:46.773 28.948 - 29.145: 99.4492% ( 1) 00:07:46.773 29.342 - 29.538: 99.4609% ( 2) 00:07:46.773 29.538 - 29.735: 99.4667% ( 1) 00:07:46.773 29.932 - 30.129: 99.4726% ( 1) 00:07:46.773 30.917 - 31.114: 99.5019% ( 5) 00:07:46.773 31.114 - 31.311: 99.5078% ( 1) 00:07:46.773 31.311 - 31.508: 99.5253% ( 3) 00:07:46.773 31.508 - 31.705: 99.5546% ( 5) 00:07:46.773 31.705 - 31.902: 99.6191% ( 11) 00:07:46.773 31.902 - 32.098: 99.6718% ( 9) 00:07:46.773 32.098 - 32.295: 99.7129% ( 7) 00:07:46.773 32.295 - 32.492: 99.7363% ( 4) 00:07:46.773 32.492 - 32.689: 99.7597% ( 4) 00:07:46.773 32.689 - 32.886: 99.7773% ( 3) 00:07:46.773 32.886 - 33.083: 99.7890% ( 2) 00:07:46.773 33.280 - 33.477: 99.7949% ( 1) 00:07:46.773 33.477 - 33.674: 99.8008% ( 1) 00:07:46.773 33.674 - 33.871: 99.8066% ( 1) 00:07:46.773 33.871 - 34.068: 99.8125% ( 1) 00:07:46.773 34.068 - 34.265: 99.8183% ( 1) 00:07:46.773 34.265 - 34.462: 99.8242% ( 1) 00:07:46.773 35.052 - 35.249: 99.8301% ( 1) 00:07:46.773 35.446 - 35.643: 99.8359% ( 1) 00:07:46.773 37.809 - 38.006: 99.8418% ( 1) 00:07:46.773 39.188 - 39.385: 99.8476% ( 1) 00:07:46.773 39.582 - 39.778: 99.8535% ( 1) 00:07:46.773 39.975 - 40.172: 99.8594% ( 1) 00:07:46.773 40.566 - 40.763: 99.8652% ( 1) 00:07:46.773 41.551 - 41.748: 99.8769% ( 2) 00:07:46.773 42.142 - 42.338: 99.8828% ( 1) 00:07:46.774 42.338 - 42.535: 99.8887% ( 1) 00:07:46.774 43.520 - 43.717: 99.8945% ( 1) 00:07:46.774 44.111 - 44.308: 99.9004% ( 1) 00:07:46.774 44.898 - 45.095: 99.9062% ( 1) 00:07:46.774 45.292 - 45.489: 99.9180% ( 2) 00:07:46.774 46.671 - 46.868: 99.9238% ( 1) 00:07:46.774 47.262 - 47.458: 99.9297% ( 1) 00:07:46.774 48.640 - 48.837: 99.9355% ( 1) 00:07:46.774 49.428 - 49.625: 99.9473% ( 2) 00:07:46.774 50.018 - 50.215: 99.9531% ( 1) 00:07:46.774 51.988 - 52.382: 99.9590% ( 1) 00:07:46.774 52.775 - 53.169: 99.9648% ( 1) 00:07:46.774 53.169 - 53.563: 99.9707% ( 1) 00:07:46.774 54.351 - 54.745: 99.9824% ( 2) 00:07:46.774 54.745 - 55.138: 99.9883% ( 1) 00:07:46.774 67.742 - 68.135: 99.9941% ( 1) 00:07:46.774 258.363 - 259.938: 100.0000% ( 1) 00:07:46.774 00:07:46.774 Complete histogram 00:07:46.774 ================== 00:07:46.774 Range in us Cumulative Count 00:07:46.774 7.286 - 7.335: 0.0645% ( 11) 00:07:46.774 7.335 - 7.385: 1.3595% ( 221) 00:07:46.774 7.385 - 7.434: 8.5321% ( 1224) 00:07:46.774 7.434 - 7.483: 24.6645% ( 2753) 00:07:46.774 7.483 - 7.532: 44.8403% ( 3443) 00:07:46.774 7.532 - 7.582: 60.5333% ( 2678) 00:07:46.774 7.582 - 7.631: 69.6806% ( 1561) 00:07:46.774 7.631 - 7.680: 74.3393% ( 795) 00:07:46.774 7.680 - 7.729: 76.9294% ( 442) 00:07:46.774 7.729 - 7.778: 78.1483% ( 208) 00:07:46.774 7.778 - 7.828: 78.9218% ( 132) 00:07:46.774 7.828 - 7.877: 79.2499% ( 56) 00:07:46.774 7.877 - 7.926: 79.4199% ( 29) 00:07:46.774 7.926 - 
00:07:46.774 [nvme_overhead cumulative latency histogram elided: bucket lines of the form "<low> - <high>: <cumulative %> ( <count> )", running from the 7.975 usec bucket at 79.6132% cumulative through the 529.329 - 532.480 usec bucket, where the distribution reaches 100.0000%]
00:07:46.775
00:07:46.775 real	0m1.230s
00:07:46.775 user	0m1.074s
00:07:46.775 sys	0m0.101s
00:07:46.775 12:40:11 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:46.775 12:40:11 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:07:46.775 ************************************
00:07:46.775 END TEST nvme_overhead
00:07:46.775 ************************************
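Each bucket line in the elided histogram above carries a cumulative percentage, so any percentile can be read off as the first bucket whose cumulative value crosses it. A minimal post-processing sketch (hypothetical helper, not part of the SPDK tree; assumes the bucket lines were saved to histogram.txt with the console timestamps stripped):

```bash
# Print the first histogram bucket whose cumulative percentage reaches a
# target, e.g. p99. Bucket lines look like: "8.025 - 8.074: 80.0059% ( 35)".
awk -v target=99 '$2 == "-" && $4 + 0 >= target { print "p" target " bucket:", $1, "-", $3, $4; exit }' histogram.txt
```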
00:07:46.775 12:40:11 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:46.775 12:40:11 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:07:46.775 12:40:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:46.775 12:40:11 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:46.775 ************************************
00:07:46.775 START TEST nvme_arbitration
00:07:46.775 ************************************
00:07:46.775 12:40:11 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:50.107 Initializing NVMe Controllers
00:07:50.107 Attached to 0000:00:10.0
00:07:50.107 Attached to 0000:00:11.0
00:07:50.107 Attached to 0000:00:13.0
00:07:50.107 Attached to 0000:00:12.0
00:07:50.107 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:07:50.107 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:07:50.107 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:07:50.107 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:07:50.107 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:07:50.107 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:07:50.107 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:07:50.107 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:07:50.107 Initialization complete. Launching workers.
00:07:50.107 Starting thread on core 1 with urgent priority queue
00:07:50.107 Starting thread on core 2 with urgent priority queue
00:07:50.107 Starting thread on core 3 with urgent priority queue
00:07:50.107 Starting thread on core 0 with urgent priority queue
00:07:50.107 QEMU NVMe Ctrl (12340 ) core 0: 853.33 IO/s 117.19 secs/100000 ios
00:07:50.107 QEMU NVMe Ctrl (12342 ) core 0: 853.33 IO/s 117.19 secs/100000 ios
00:07:50.107 QEMU NVMe Ctrl (12341 ) core 1: 853.33 IO/s 117.19 secs/100000 ios
00:07:50.107 QEMU NVMe Ctrl (12342 ) core 1: 853.33 IO/s 117.19 secs/100000 ios
00:07:50.107 QEMU NVMe Ctrl (12343 ) core 2: 810.67 IO/s 123.36 secs/100000 ios
00:07:50.107 QEMU NVMe Ctrl (12342 ) core 3: 917.33 IO/s 109.01 secs/100000 ios
00:07:50.107 ========================================================
00:07:50.107
00:07:50.107 real	0m3.315s
00:07:50.107 user	0m9.285s
00:07:50.107 sys	0m0.107s
00:07:50.107 12:40:15 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:50.107 12:40:15 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:07:50.107 ************************************
00:07:50.107 END TEST nvme_arbitration
00:07:50.107 ************************************
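A quick sanity check on the arbitration results above: the secs/100000 ios column is just 100000 divided by the IO/s column, so the 853.33 IO/s threads should report about 117.19 seconds:

```bash
# 100000 I/Os at 853.33 IO/s: expect ~117.19 s, matching the log line.
awk 'BEGIN { printf "%.2f\n", 100000 / 853.33 }'
```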
00:07:50.107 12:40:15 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:07:50.107 12:40:15 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:50.107 12:40:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:50.107 12:40:15 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:50.107 ************************************
00:07:50.107 START TEST nvme_single_aen
00:07:50.107 ************************************
00:07:50.107 12:40:15 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:07:50.107 Asynchronous Event Request test
00:07:50.107 Attached to 0000:00:10.0
00:07:50.107 Attached to 0000:00:11.0
00:07:50.107 Attached to 0000:00:13.0
00:07:50.107 Attached to 0000:00:12.0
00:07:50.107 Reset controller to setup AER completions for this process
00:07:50.107 Registering asynchronous event callbacks...
00:07:50.107 Getting orig temperature thresholds of all controllers
00:07:50.107 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:50.107 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:50.107 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:50.107 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:50.107 Setting all controllers temperature threshold low to trigger AER
00:07:50.107 Waiting for all controllers temperature threshold to be set lower
00:07:50.107 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:50.107 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:07:50.107 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:50.107 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:07:50.107 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:50.107 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:07:50.107 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:50.107 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:07:50.107 Waiting for all controllers to trigger AER and reset threshold
00:07:50.107 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:50.107 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:50.107 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:50.107 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:50.107 Cleaning up...
00:07:50.107
00:07:50.107 real	0m0.218s
00:07:50.107 user	0m0.088s
00:07:50.107 sys	0m0.089s
00:07:50.107 12:40:15 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:50.107 12:40:15 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:07:50.107 ************************************
00:07:50.107 END TEST nvme_single_aen
00:07:50.107 ************************************
00:07:50.107 12:40:15 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:07:50.107 12:40:15 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:50.107 12:40:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:50.107 12:40:15 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:50.107 ************************************
00:07:50.107 START TEST nvme_doorbell_aers
00:07:50.107 ************************************
12:40:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
12:40:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
12:40:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
12:40:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
12:40:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
12:40:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
12:40:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
12:40:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
12:40:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
12:40:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
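The trace above is the whole device-discovery step: gen_nvme.sh emits attach-controller config JSON and jq extracts each controller's PCI address (traddr) into a bash array. A standalone sketch of that pattern, together with the per-device loop the test runs next (paths as used throughout this log; jq is required):

```bash
rootdir=/home/vagrant/spdk_repo/spdk
# Collect the PCI addresses (BDFs) of all NVMe controllers gen_nvme.sh knows about.
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
# Run the doorbell/AER reproducer against each controller, 10 s per device.
for bdf in "${bdfs[@]}"; do
    timeout --preserve-status 10 \
        "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
done
```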
00:07:50.369 12:40:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:50.369 12:40:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:50.369 12:40:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:50.369 12:40:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:50.369 [2024-11-20 12:40:15.838113] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63359) is not found. Dropping the request. 00:08:00.362 Executing: test_write_invalid_db 00:08:00.362 Waiting for AER completion... 00:08:00.362 Failure: test_write_invalid_db 00:08:00.362 00:08:00.362 Executing: test_invalid_db_write_overflow_sq 00:08:00.362 Waiting for AER completion... 00:08:00.362 Failure: test_invalid_db_write_overflow_sq 00:08:00.362 00:08:00.362 Executing: test_invalid_db_write_overflow_cq 00:08:00.362 Waiting for AER completion... 00:08:00.362 Failure: test_invalid_db_write_overflow_cq 00:08:00.362 00:08:00.362 12:40:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:00.362 12:40:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:00.362 [2024-11-20 12:40:25.867516] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63359) is not found. Dropping the request. 00:08:10.347 Executing: test_write_invalid_db 00:08:10.347 Waiting for AER completion... 00:08:10.347 Failure: test_write_invalid_db 00:08:10.347 00:08:10.347 Executing: test_invalid_db_write_overflow_sq 00:08:10.347 Waiting for AER completion... 00:08:10.347 Failure: test_invalid_db_write_overflow_sq 00:08:10.347 00:08:10.347 Executing: test_invalid_db_write_overflow_cq 00:08:10.347 Waiting for AER completion... 00:08:10.347 Failure: test_invalid_db_write_overflow_cq 00:08:10.347 00:08:10.347 12:40:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:10.347 12:40:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:10.607 [2024-11-20 12:40:35.900802] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63359) is not found. Dropping the request. 00:08:20.583 Executing: test_write_invalid_db 00:08:20.583 Waiting for AER completion... 00:08:20.583 Failure: test_write_invalid_db 00:08:20.583 00:08:20.583 Executing: test_invalid_db_write_overflow_sq 00:08:20.583 Waiting for AER completion... 00:08:20.583 Failure: test_invalid_db_write_overflow_sq 00:08:20.583 00:08:20.583 Executing: test_invalid_db_write_overflow_cq 00:08:20.583 Waiting for AER completion... 
00:08:20.583 Failure: test_invalid_db_write_overflow_cq 00:08:20.583 00:08:20.583 12:40:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:20.583 12:40:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:20.583 [2024-11-20 12:40:45.932376] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63359) is not found. Dropping the request. 00:08:30.553 Executing: test_write_invalid_db 00:08:30.553 Waiting for AER completion... 00:08:30.553 Failure: test_write_invalid_db 00:08:30.553 00:08:30.553 Executing: test_invalid_db_write_overflow_sq 00:08:30.553 Waiting for AER completion... 00:08:30.553 Failure: test_invalid_db_write_overflow_sq 00:08:30.553 00:08:30.553 Executing: test_invalid_db_write_overflow_cq 00:08:30.553 Waiting for AER completion... 00:08:30.553 Failure: test_invalid_db_write_overflow_cq 00:08:30.553 00:08:30.553 00:08:30.553 real 0m40.187s 00:08:30.553 user 0m34.270s 00:08:30.553 sys 0m5.536s 00:08:30.553 12:40:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.553 12:40:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:30.553 ************************************ 00:08:30.553 END TEST nvme_doorbell_aers 00:08:30.553 ************************************ 00:08:30.553 12:40:55 nvme -- nvme/nvme.sh@97 -- # uname 00:08:30.553 12:40:55 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:30.553 12:40:55 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:30.553 12:40:55 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:30.553 12:40:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.553 12:40:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:30.553 ************************************ 00:08:30.553 START TEST nvme_multi_aen 00:08:30.553 ************************************ 00:08:30.553 12:40:55 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:30.553 [2024-11-20 12:40:55.975918] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63359) is not found. Dropping the request. 00:08:30.553 [2024-11-20 12:40:55.976135] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63359) is not found. Dropping the request. 00:08:30.553 [2024-11-20 12:40:55.976193] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63359) is not found. Dropping the request. 00:08:30.553 [2024-11-20 12:40:55.977445] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63359) is not found. Dropping the request. 00:08:30.553 [2024-11-20 12:40:55.977572] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63359) is not found. Dropping the request. 00:08:30.553 [2024-11-20 12:40:55.977630] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63359) is not found. Dropping the request. 00:08:30.553 [2024-11-20 12:40:55.978501] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63359) is not found. 
Dropping the request. 00:08:30.553 [2024-11-20 12:40:55.978591] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63359) is not found. Dropping the request. 00:08:30.553 [2024-11-20 12:40:55.978645] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63359) is not found. Dropping the request. 00:08:30.553 [2024-11-20 12:40:55.979514] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63359) is not found. Dropping the request. 00:08:30.553 [2024-11-20 12:40:55.979603] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63359) is not found. Dropping the request. 00:08:30.553 [2024-11-20 12:40:55.979653] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63359) is not found. Dropping the request. 00:08:30.553 Child process pid: 63885 00:08:30.813 [Child] Asynchronous Event Request test 00:08:30.813 [Child] Attached to 0000:00:10.0 00:08:30.813 [Child] Attached to 0000:00:11.0 00:08:30.813 [Child] Attached to 0000:00:13.0 00:08:30.813 [Child] Attached to 0000:00:12.0 00:08:30.813 [Child] Registering asynchronous event callbacks... 00:08:30.813 [Child] Getting orig temperature thresholds of all controllers 00:08:30.813 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:30.813 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:30.813 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:30.813 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:30.813 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:30.813 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:30.813 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:30.813 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:30.813 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:30.813 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.813 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.813 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.813 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.813 [Child] Cleaning up... 00:08:30.813 Asynchronous Event Request test 00:08:30.813 Attached to 0000:00:10.0 00:08:30.813 Attached to 0000:00:11.0 00:08:30.813 Attached to 0000:00:13.0 00:08:30.813 Attached to 0000:00:12.0 00:08:30.813 Reset controller to setup AER completions for this process 00:08:30.813 Registering asynchronous event callbacks... 
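The threshold sequence that follows, like the single-AEN test earlier, forces an AER without any real overheating: drop the temperature threshold below the drive's current temperature, let the controller raise the event, then restore the threshold from the callback. Outside this harness, a rough equivalent with nvme-cli might look like the sketch below (the device path and values are assumptions; feature 0x04 is the NVMe temperature threshold feature):

```bash
# Read the current composite temperature threshold (feature 0x04).
nvme get-feature /dev/nvme0 -f 0x04
# Set it below the current temperature (0x0140 = 320 K, versus 323 K reported
# in this log) so the controller raises a temperature AER.
nvme set-feature /dev/nvme0 -f 0x04 -v 0x0140
# Restore the original threshold afterwards (0x0157 = 343 K in this log).
nvme set-feature /dev/nvme0 -f 0x04 -v 0x0157
```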
00:08:30.813 Getting orig temperature thresholds of all controllers 00:08:30.813 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:30.813 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:30.813 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:30.813 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:30.813 Setting all controllers temperature threshold low to trigger AER 00:08:30.813 Waiting for all controllers temperature threshold to be set lower 00:08:30.813 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:30.813 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:30.813 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:30.813 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:30.813 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:30.813 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:30.813 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:30.813 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:30.813 Waiting for all controllers to trigger AER and reset threshold 00:08:30.813 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.813 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.813 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.813 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.813 Cleaning up... 00:08:30.813 ************************************ 00:08:30.813 END TEST nvme_multi_aen 00:08:30.813 ************************************ 00:08:30.813 00:08:30.813 real 0m0.427s 00:08:30.813 user 0m0.136s 00:08:30.813 sys 0m0.188s 00:08:30.813 12:40:56 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.813 12:40:56 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:30.813 12:40:56 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:30.813 12:40:56 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:30.813 12:40:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.813 12:40:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:30.813 ************************************ 00:08:30.813 START TEST nvme_startup 00:08:30.813 ************************************ 00:08:30.813 12:40:56 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:31.071 Initializing NVMe Controllers 00:08:31.071 Attached to 0000:00:10.0 00:08:31.071 Attached to 0000:00:11.0 00:08:31.072 Attached to 0000:00:13.0 00:08:31.072 Attached to 0000:00:12.0 00:08:31.072 Initialization complete. 00:08:31.072 Time used:144402.438 (us). 
00:08:31.072 00:08:31.072 real 0m0.205s 00:08:31.072 user 0m0.064s 00:08:31.072 sys 0m0.096s 00:08:31.072 12:40:56 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.072 12:40:56 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:31.072 ************************************ 00:08:31.072 END TEST nvme_startup 00:08:31.072 ************************************ 00:08:31.072 12:40:56 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:31.072 12:40:56 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.072 12:40:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.072 12:40:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:31.072 ************************************ 00:08:31.072 START TEST nvme_multi_secondary 00:08:31.072 ************************************ 00:08:31.072 12:40:56 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:31.072 12:40:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63935 00:08:31.072 12:40:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:31.072 12:40:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63936 00:08:31.072 12:40:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:31.072 12:40:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:34.357 Initializing NVMe Controllers 00:08:34.357 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:34.357 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:34.357 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:34.357 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:34.357 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:34.357 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:34.357 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:34.357 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:34.357 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:34.357 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:34.357 Initialization complete. Launching workers. 
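nvme_multi_secondary exercises the same controllers from several processes at once: two spdk_nvme_perf instances run in the background on their own core masks while a third runs in the foreground, all joined into one shared-memory group via -i 0. A bash sketch of that launch pattern using the exact perf arguments from this run, whose latency tables follow below (pid capture and wait are plain shell; the in-tree script wraps them in helpers):

```bash
perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
# Two background readers on cores 0 and 1, one foreground reader on core 2;
# -i 0 places all three processes in the same SPDK shared-memory group.
"$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!
"$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!
"$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
wait "$pid0" "$pid1"
```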
00:08:34.357 ========================================================
00:08:34.357 Latency(us)
00:08:34.357 Device Information : IOPS MiB/s Average min max
00:08:34.357 PCIE (0000:00:10.0) NSID 1 from core 2: 3300.49 12.89 4845.78 740.42 18759.69
00:08:34.357 PCIE (0000:00:11.0) NSID 1 from core 2: 3300.49 12.89 4846.97 755.54 18770.06
00:08:34.357 PCIE (0000:00:13.0) NSID 1 from core 2: 3300.49 12.89 4847.50 757.88 15350.78
00:08:34.357 PCIE (0000:00:12.0) NSID 1 from core 2: 3300.49 12.89 4846.62 774.01 14538.05
00:08:34.357 PCIE (0000:00:12.0) NSID 2 from core 2: 3300.49 12.89 4847.84 780.92 13434.62
00:08:34.357 PCIE (0000:00:12.0) NSID 3 from core 2: 3300.49 12.89 4847.43 763.36 19741.67
00:08:34.357 ========================================================
00:08:34.357 Total : 19802.96 77.36 4847.02 740.42 19741.67
00:08:34.357
00:08:34.357 12:40:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63935
00:08:34.357 Initializing NVMe Controllers
00:08:34.357 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:34.357 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:34.357 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:34.357 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:34.357 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:08:34.357 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:08:34.357 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:08:34.357 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:08:34.357 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:08:34.357 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:08:34.357 Initialization complete. Launching workers.
00:08:36.885 ========================================================
00:08:36.885 Latency(us)
00:08:36.885 Device Information : IOPS MiB/s Average min max
00:08:36.885 PCIE (0000:00:10.0) NSID 1 from core 0: 10738.01 41.95 1488.74 673.47 8213.37
00:08:36.885 PCIE (0000:00:11.0) NSID 1 from core 0: 10738.01 41.95 1489.61 686.88 8082.00
00:08:36.885 PCIE (0000:00:13.0) NSID 1 from core 0: 10738.01 41.95 1489.58 705.44 9413.25
00:08:36.885 PCIE (0000:00:12.0) NSID 1 from core 0: 10738.01 41.95 1489.56 672.62 8760.47
00:08:36.885 PCIE (0000:00:12.0) NSID 2 from core 0: 10738.01 41.95 1489.54 644.46 11565.21
00:08:36.885 PCIE (0000:00:12.0) NSID 3 from core 0: 10738.01 41.95 1489.51 602.93 9444.92
00:08:36.885 ========================================================
00:08:36.885 Total : 64428.04 251.67 1489.42 602.93 11565.21
00:08:36.885
00:08:36.885 12:41:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63936
00:08:36.885 12:41:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:08:36.885 12:41:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=64011
00:08:36.885 12:41:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=64012
00:08:36.885 12:41:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:08:36.885 12:41:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
Initializing NVMe Controllers
Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
Initialization complete. Launching workers.
00:08:40.168 ========================================================
00:08:40.168 Latency(us)
00:08:40.168 Device Information : IOPS MiB/s Average min max
00:08:40.168 PCIE (0000:00:10.0) NSID 1 from core 1: 4669.67 18.24 3424.80 730.17 14355.37
00:08:40.168 PCIE (0000:00:11.0) NSID 1 from core 1: 4669.67 18.24 3427.44 746.79 13291.78
00:08:40.168 PCIE (0000:00:13.0) NSID 1 from core 1: 4669.67 18.24 3428.03 743.55 13777.01
00:08:40.168 PCIE (0000:00:12.0) NSID 1 from core 1: 4669.67 18.24 3428.49 745.72 12153.24
00:08:40.168 PCIE (0000:00:12.0) NSID 2 from core 1: 4669.67 18.24 3428.92 732.86 12309.79
00:08:40.168 PCIE (0000:00:12.0) NSID 3 from core 1: 4669.67 18.24 3428.94 741.37 13680.95
00:08:40.168 ========================================================
00:08:40.168 Total : 28018.04 109.45 3427.77 730.17 14355.37
00:08:40.168
00:08:40.168 Initializing NVMe Controllers
00:08:40.168 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:40.168 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:40.168 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:40.168 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:40.168 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:40.168 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:40.168 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:40.168 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:40.168 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:40.168 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:40.168 Initialization complete. Launching workers.
00:08:40.168 ========================================================
00:08:40.168 Latency(us)
00:08:40.168 Device Information : IOPS MiB/s Average min max
00:08:40.168 PCIE (0000:00:10.0) NSID 1 from core 0: 4436.83 17.33 3604.51 1068.17 12261.85
00:08:40.168 PCIE (0000:00:11.0) NSID 1 from core 0: 4436.83 17.33 3605.55 1063.72 12445.00
00:08:40.168 PCIE (0000:00:13.0) NSID 1 from core 0: 4436.83 17.33 3605.42 1053.99 13506.89
00:08:40.168 PCIE (0000:00:12.0) NSID 1 from core 0: 4436.83 17.33 3605.33 1084.35 13909.91
00:08:40.168 PCIE (0000:00:12.0) NSID 2 from core 0: 4436.83 17.33 3605.23 1022.63 13296.15
00:08:40.168 PCIE (0000:00:12.0) NSID 3 from core 0: 4436.83 17.33 3605.12 908.73 12524.45
00:08:40.168 ========================================================
00:08:40.168 Total : 26621.01 103.99 3605.19 908.73 13909.91
00:08:40.168
00:08:42.071 Initializing NVMe Controllers
00:08:42.071 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:42.071 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:42.071 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:42.071 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:42.071 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:08:42.071 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:08:42.071 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:08:42.071 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:08:42.071 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:08:42.071 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:08:42.071 Initialization complete. Launching workers.
00:08:42.071 ========================================================
00:08:42.071 Latency(us)
00:08:42.071 Device Information : IOPS MiB/s Average min max
00:08:42.071 PCIE (0000:00:10.0) NSID 1 from core 2: 2191.77 8.56 7298.29 733.30 41500.50
00:08:42.071 PCIE (0000:00:11.0) NSID 1 from core 2: 2191.77 8.56 7300.01 758.23 35124.13
00:08:42.071 PCIE (0000:00:13.0) NSID 1 from core 2: 2191.77 8.56 7300.60 754.04 34798.42
00:08:42.071 PCIE (0000:00:12.0) NSID 1 from core 2: 2191.77 8.56 7300.13 756.07 34054.71
00:08:42.071 PCIE (0000:00:12.0) NSID 2 from core 2: 2191.77 8.56 7300.37 756.67 37196.74
00:08:42.071 PCIE (0000:00:12.0) NSID 3 from core 2: 2191.77 8.56 7299.90 761.87 36163.47
00:08:42.071 ========================================================
00:08:42.071 Total : 13150.62 51.37 7299.88 733.30 41500.50
00:08:42.071
00:08:42.071 ************************************
00:08:42.071 END TEST nvme_multi_secondary
00:08:42.071 ************************************
00:08:42.071 12:41:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 64011
00:08:42.071 12:41:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 64012
00:08:42.071
00:08:42.071 real	0m10.699s
00:08:42.071 user	0m18.332s
00:08:42.071 sys	0m0.695s
00:08:42.071 12:41:07 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:42.071 12:41:07 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
00:08:42.071 12:41:07 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:08:42.071 12:41:07 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:08:42.071 12:41:07 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62956 ]]
00:08:42.071 12:41:07 nvme -- common/autotest_common.sh@1094 -- # kill 62956
00:08:42.071 12:41:07 nvme -- common/autotest_common.sh@1095 -- # wait 62956
00:08:42.071 [2024-11-20 12:41:07.244712] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63884) is not found. Dropping the request.
00:08:42.071 [2024-11-20 12:41:07.245132] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63884) is not found. Dropping the request.
00:08:42.071 [2024-11-20 12:41:07.245361] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63884) is not found. Dropping the request.
00:08:42.071 [2024-11-20 12:41:07.245531] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63884) is not found. Dropping the request.
00:08:42.071 [2024-11-20 12:41:07.248573] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63884) is not found. Dropping the request.
00:08:42.071 [2024-11-20 12:41:07.248755] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63884) is not found. Dropping the request.
00:08:42.071 [2024-11-20 12:41:07.248880] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63884) is not found. Dropping the request.
00:08:42.071 [2024-11-20 12:41:07.248972] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63884) is not found. Dropping the request.
00:08:42.071 [2024-11-20 12:41:07.251105] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63884) is not found. Dropping the request.
00:08:42.071 [2024-11-20 12:41:07.251258] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63884) is not found. Dropping the request. 00:08:42.071 [2024-11-20 12:41:07.251275] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63884) is not found. Dropping the request. 00:08:42.071 [2024-11-20 12:41:07.251287] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63884) is not found. Dropping the request. 00:08:42.071 [2024-11-20 12:41:07.253161] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63884) is not found. Dropping the request. 00:08:42.071 [2024-11-20 12:41:07.253202] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63884) is not found. Dropping the request. 00:08:42.071 [2024-11-20 12:41:07.253212] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63884) is not found. Dropping the request. 00:08:42.071 [2024-11-20 12:41:07.253224] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63884) is not found. Dropping the request. 00:08:42.071 [2024-11-20 12:41:07.536446] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:08:42.071 12:41:07 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:42.071 12:41:07 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:42.071 12:41:07 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:42.071 12:41:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.071 12:41:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.071 12:41:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:42.071 ************************************ 00:08:42.072 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:42.072 ************************************ 00:08:42.072 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:42.330 * Looking for test storage... 
00:08:42.330 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:42.330 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:42.330 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:08:42.330 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:42.330 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:42.330 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.330 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.330 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.330 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.330 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.330 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.330 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.330 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.330 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:42.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.331 --rc genhtml_branch_coverage=1 00:08:42.331 --rc genhtml_function_coverage=1 00:08:42.331 --rc genhtml_legend=1 00:08:42.331 --rc geninfo_all_blocks=1 00:08:42.331 --rc geninfo_unexecuted_blocks=1 00:08:42.331 00:08:42.331 ' 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:42.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.331 --rc genhtml_branch_coverage=1 00:08:42.331 --rc genhtml_function_coverage=1 00:08:42.331 --rc genhtml_legend=1 00:08:42.331 --rc geninfo_all_blocks=1 00:08:42.331 --rc geninfo_unexecuted_blocks=1 00:08:42.331 00:08:42.331 ' 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:42.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.331 --rc genhtml_branch_coverage=1 00:08:42.331 --rc genhtml_function_coverage=1 00:08:42.331 --rc genhtml_legend=1 00:08:42.331 --rc geninfo_all_blocks=1 00:08:42.331 --rc geninfo_unexecuted_blocks=1 00:08:42.331 00:08:42.331 ' 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:42.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.331 --rc genhtml_branch_coverage=1 00:08:42.331 --rc genhtml_function_coverage=1 00:08:42.331 --rc genhtml_legend=1 00:08:42.331 --rc geninfo_all_blocks=1 00:08:42.331 --rc geninfo_unexecuted_blocks=1 00:08:42.331 00:08:42.331 ' 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:42.331 
12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:42.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64168 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64168 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64168 ']' 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
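The setup above boils down to: start spdk_tgt, wait for its RPC socket, attach the controller, arm a one-shot error injection that holds the next GET FEATURES (opcode 10) admin command for up to 15 s, then reset the controller while that command is stuck. A condensed, hypothetical replay of those steps with rpc.py (socket and pid handling simplified; the base64 payload is the one from this log):

```bash
spdk=/home/vagrant/spdk_repo/spdk
"$spdk/build/bin/spdk_tgt" -m 0xF &                     # target on cores 0-3
# Poll until /var/tmp/spdk.sock answers RPCs (waitforlisten does this in-tree).
until "$spdk/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
"$spdk/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
# Arm a one-shot injection: the next GET FEATURES admin command is held
# (do_not_submit) and completed with SCT 0 / SC 1 after up to 15 s.
"$spdk/scripts/rpc.py" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin \
    --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
# Send the admin command that will get stuck, then reset the controller out
# from under it and verify the reset still completes.
"$spdk/scripts/rpc.py" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
    -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &
get_feat_pid=$!
sleep 2
"$spdk/scripts/rpc.py" bdev_nvme_reset_controller nvme0
wait "$get_feat_pid"
```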
00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.331 12:41:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:42.590 [2024-11-20 12:41:07.865620] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:08:42.590 [2024-11-20 12:41:07.865759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64168 ] 00:08:42.590 [2024-11-20 12:41:08.039048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:42.848 [2024-11-20 12:41:08.141909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.848 [2024-11-20 12:41:08.142203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.848 [2024-11-20 12:41:08.142542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.848 [2024-11-20 12:41:08.142627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.415 12:41:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.415 12:41:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:43.415 12:41:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:43.415 12:41:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.415 12:41:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:43.415 nvme0n1 00:08:43.415 12:41:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.415 12:41:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:43.415 12:41:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_tFfb6.txt 00:08:43.415 12:41:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:43.415 12:41:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.415 12:41:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:43.415 true 00:08:43.415 12:41:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.415 12:41:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:43.415 12:41:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732106468 00:08:43.415 12:41:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64197 00:08:43.415 12:41:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:43.415 12:41:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:43.415 
12:41:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:45.410 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:45.410 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.410 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:45.410 [2024-11-20 12:41:10.851296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:45.410 [2024-11-20 12:41:10.851574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:45.410 [2024-11-20 12:41:10.851598] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:45.410 [2024-11-20 12:41:10.851612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.410 [2024-11-20 12:41:10.853639] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:45.410 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64197 00:08:45.410 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.410 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64197 00:08:45.410 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64197 00:08:45.410 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:45.410 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:45.410 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:45.410 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.410 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:45.410 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.410 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:45.410 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_tFfb6.txt 00:08:45.410 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:45.410 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:45.410 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:45.410 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:45.410 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:45.410 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:45.410 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:45.668 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:45.668 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:45.668 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:45.668 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:45.668 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:45.668 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:45.668 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:45.668 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:45.668 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:45.668 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:45.668 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:45.668 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:45.668 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_tFfb6.txt 00:08:45.668 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64168 00:08:45.668 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64168 ']' 00:08:45.668 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64168 00:08:45.668 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:45.669 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.669 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64168 00:08:45.669 killing process with pid 64168 00:08:45.669 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:45.669 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:45.669 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64168' 00:08:45.669 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64168 00:08:45.669 12:41:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64168 00:08:47.044 12:41:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:47.044 12:41:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:47.044 00:08:47.044 real 0m4.898s 00:08:47.044 user 0m17.276s 00:08:47.044 sys 0m0.525s 00:08:47.044 12:41:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:47.044 12:41:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:47.044 ************************************ 00:08:47.044 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:47.044 ************************************ 00:08:47.044 12:41:12 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:47.044 12:41:12 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:47.044 12:41:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:47.044 12:41:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.044 12:41:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:47.044 ************************************ 00:08:47.044 START TEST nvme_fio 00:08:47.044 ************************************ 00:08:47.044 12:41:12 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:47.044 12:41:12 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:47.044 12:41:12 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:47.044 12:41:12 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:47.044 12:41:12 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:47.044 12:41:12 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:47.044 12:41:12 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:47.044 12:41:12 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:47.044 12:41:12 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:47.302 12:41:12 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:47.302 12:41:12 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:47.302 12:41:12 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:47.302 12:41:12 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:47.302 12:41:12 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:47.302 12:41:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:47.302 12:41:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:47.302 12:41:12 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:47.303 12:41:12 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:47.561 12:41:13 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:47.561 12:41:13 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:47.561 12:41:13 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:47.561 12:41:13 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:47.561 12:41:13 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:47.561 12:41:13 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:47.561 12:41:13 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:47.561 12:41:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:47.561 12:41:13 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:47.561 12:41:13 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:47.561 12:41:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:47.561 12:41:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:47.561 12:41:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:47.561 12:41:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:47.561 12:41:13 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:47.561 12:41:13 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:47.561 12:41:13 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:47.561 12:41:13 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:47.820 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:47.820 fio-3.35 00:08:47.820 Starting 1 thread 00:08:55.931 00:08:55.931 test: (groupid=0, jobs=1): err= 0: pid=64337: Wed Nov 20 12:41:20 2024 00:08:55.931 read: IOPS=23.9k, BW=93.4MiB/s (97.9MB/s)(187MiB/2001msec) 00:08:55.931 slat (nsec): min=4216, max=83363, avg=4919.92, stdev=2048.20 00:08:55.931 clat (usec): min=205, max=8984, avg=2673.99, stdev=741.87 00:08:55.931 lat (usec): min=209, max=8998, avg=2678.91, stdev=743.08 00:08:55.931 clat percentiles (usec): 00:08:55.931 | 1.00th=[ 2008], 5.00th=[ 2311], 10.00th=[ 2343], 20.00th=[ 2376], 00:08:55.931 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507], 00:08:55.931 | 70.00th=[ 2540], 80.00th=[ 2638], 90.00th=[ 3032], 95.00th=[ 4293], 00:08:55.931 | 99.00th=[ 6194], 99.50th=[ 6587], 99.90th=[ 8029], 99.95th=[ 8586], 00:08:55.931 | 99.99th=[ 8979] 00:08:55.931 bw ( KiB/s): min=91464, max=97048, per=99.56%, avg=95184.00, stdev=3221.62, samples=3 00:08:55.931 iops : min=22866, max=24262, avg=23796.00, stdev=805.40, samples=3 00:08:55.931 write: IOPS=23.8k, BW=92.8MiB/s (97.3MB/s)(186MiB/2001msec); 0 zone resets 00:08:55.931 slat (usec): min=4, max=139, avg= 5.22, stdev= 2.07 00:08:55.931 clat (usec): min=228, max=9120, avg=2675.96, stdev=744.21 00:08:55.931 lat (usec): min=233, max=9125, avg=2681.17, stdev=745.40 00:08:55.931 clat percentiles (usec): 00:08:55.931 | 1.00th=[ 2008], 5.00th=[ 2311], 10.00th=[ 2343], 20.00th=[ 2376], 00:08:55.931 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507], 00:08:55.931 | 70.00th=[ 2540], 80.00th=[ 2638], 90.00th=[ 3032], 95.00th=[ 4293], 00:08:55.931 | 99.00th=[ 6194], 99.50th=[ 6587], 99.90th=[ 8029], 99.95th=[ 8455], 00:08:55.931 | 99.99th=[ 8979] 00:08:55.931 bw ( KiB/s): min=91144, max=98112, per=100.00%, avg=95186.67, stdev=3615.88, samples=3 00:08:55.931 iops : min=22786, max=24528, avg=23796.67, stdev=903.97, samples=3 00:08:55.931 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:08:55.931 lat (msec) : 2=0.94%, 4=93.31%, 10=5.71% 00:08:55.931 cpu : usr=99.05%, sys=0.10%, ctx=51, majf=0, minf=607 
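The invocation traced above (and repeated for each controller below) is the SPDK fio plugin pattern: find the sanitizer runtime the plugin was linked against with ldd, preload it together with the plugin, and hand fio a PCIe traddr as the "filename" instead of a block device. A minimal standalone sketch of the same pattern, with illustrative paths (repo layout and job file are assumptions):

  # locate the ASan runtime linked into the plugin, if the build is sanitized
  asan_lib=$(ldd ./build/fio/spdk_nvme | awk '/libasan/ {print $3}')
  # preload the sanitizer first, then the SPDK ioengine; address the
  # controller by PCIe traddr rather than a /dev node
  LD_PRELOAD="$asan_lib ./build/fio/spdk_nvme" \
      /usr/src/fio/fio ./app/fio/nvme/example_config.fio \
      '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096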
00:08:55.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:55.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:55.931 issued rwts: total=47828,47539,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:55.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:55.931 00:08:55.931 Run status group 0 (all jobs): 00:08:55.931 READ: bw=93.4MiB/s (97.9MB/s), 93.4MiB/s-93.4MiB/s (97.9MB/s-97.9MB/s), io=187MiB (196MB), run=2001-2001msec 00:08:55.931 WRITE: bw=92.8MiB/s (97.3MB/s), 92.8MiB/s-92.8MiB/s (97.3MB/s-97.3MB/s), io=186MiB (195MB), run=2001-2001msec 00:08:55.931 ----------------------------------------------------- 00:08:55.931 Suppressions used: 00:08:55.931 count bytes template 00:08:55.931 1 32 /usr/src/fio/parse.c 00:08:55.931 1 8 libtcmalloc_minimal.so 00:08:55.931 ----------------------------------------------------- 00:08:55.931 00:08:55.931 12:41:20 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:55.931 12:41:20 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:55.931 12:41:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:55.931 12:41:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:55.931 12:41:20 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:55.931 12:41:20 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:55.931 12:41:20 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:55.931 12:41:20 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:55.931 12:41:20 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:55.931 12:41:20 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:55.931 12:41:20 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:55.931 12:41:20 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:55.931 12:41:20 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:55.931 12:41:20 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:55.931 12:41:20 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:55.931 12:41:20 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:55.931 12:41:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:55.931 12:41:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:55.931 12:41:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:55.931 12:41:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:55.931 12:41:20 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:55.931 12:41:20 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:55.931 12:41:20 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:55.931 12:41:20 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:55.931 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:55.931 fio-3.35 00:08:55.931 Starting 1 thread 00:09:02.540 00:09:02.540 test: (groupid=0, jobs=1): err= 0: pid=64393: Wed Nov 20 12:41:27 2024 00:09:02.540 read: IOPS=21.4k, BW=83.5MiB/s (87.5MB/s)(167MiB/2001msec) 00:09:02.540 slat (usec): min=3, max=176, avg= 5.76, stdev= 2.28 00:09:02.540 clat (usec): min=186, max=8101, avg=2988.80, stdev=862.73 00:09:02.540 lat (usec): min=190, max=8148, avg=2994.57, stdev=863.98 00:09:02.540 clat percentiles (usec): 00:09:02.540 | 1.00th=[ 2073], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2507], 00:09:02.540 | 30.00th=[ 2573], 40.00th=[ 2638], 50.00th=[ 2737], 60.00th=[ 2835], 00:09:02.540 | 70.00th=[ 2933], 80.00th=[ 3195], 90.00th=[ 3916], 95.00th=[ 4948], 00:09:02.540 | 99.00th=[ 6587], 99.50th=[ 6783], 99.90th=[ 7046], 99.95th=[ 7177], 00:09:02.540 | 99.99th=[ 7963] 00:09:02.540 bw ( KiB/s): min=85512, max=88952, per=100.00%, avg=86968.00, stdev=1779.74, samples=3 00:09:02.540 iops : min=21378, max=22238, avg=21742.00, stdev=444.94, samples=3 00:09:02.540 write: IOPS=21.2k, BW=82.9MiB/s (86.9MB/s)(166MiB/2001msec); 0 zone resets 00:09:02.540 slat (nsec): min=4016, max=72237, avg=6082.53, stdev=2236.60 00:09:02.540 clat (usec): min=193, max=8017, avg=2996.03, stdev=857.71 00:09:02.540 lat (usec): min=198, max=8038, avg=3002.11, stdev=858.98 00:09:02.540 clat percentiles (usec): 00:09:02.540 | 1.00th=[ 2114], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2507], 00:09:02.540 | 30.00th=[ 2573], 40.00th=[ 2671], 50.00th=[ 2737], 60.00th=[ 2835], 00:09:02.540 | 70.00th=[ 2966], 80.00th=[ 3195], 90.00th=[ 3884], 95.00th=[ 4948], 00:09:02.540 | 99.00th=[ 6587], 99.50th=[ 6849], 99.90th=[ 7111], 99.95th=[ 7242], 00:09:02.540 | 99.99th=[ 7832] 00:09:02.540 bw ( KiB/s): min=86064, max=88960, per=100.00%, avg=87173.33, stdev=1562.30, samples=3 00:09:02.540 iops : min=21516, max=22240, avg=21793.33, stdev=390.58, samples=3 00:09:02.540 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:09:02.540 lat (msec) : 2=0.61%, 4=89.94%, 10=9.40% 00:09:02.540 cpu : usr=99.20%, sys=0.10%, ctx=3, majf=0, minf=607 00:09:02.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:02.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.540 issued rwts: total=42766,42449,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.540 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.540 00:09:02.540 Run status group 0 (all jobs): 00:09:02.540 READ: bw=83.5MiB/s (87.5MB/s), 83.5MiB/s-83.5MiB/s (87.5MB/s-87.5MB/s), io=167MiB (175MB), run=2001-2001msec 00:09:02.540 WRITE: bw=82.9MiB/s (86.9MB/s), 82.9MiB/s-82.9MiB/s (86.9MB/s-86.9MB/s), io=166MiB (174MB), run=2001-2001msec 00:09:02.540 ----------------------------------------------------- 00:09:02.540 Suppressions used: 00:09:02.540 count bytes template 00:09:02.540 1 32 /usr/src/fio/parse.c 00:09:02.540 1 8 libtcmalloc_minimal.so 00:09:02.540 ----------------------------------------------------- 00:09:02.540 00:09:02.540 12:41:27 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:02.540 12:41:27 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:02.540 12:41:27 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:02.540 12:41:27 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:02.540 12:41:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:02.540 12:41:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:02.540 12:41:27 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:02.540 12:41:27 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:02.540 12:41:27 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:02.540 12:41:27 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:02.540 12:41:27 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:02.540 12:41:27 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:02.540 12:41:27 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:02.540 12:41:27 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:02.540 12:41:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:02.540 12:41:27 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:02.540 12:41:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:02.540 12:41:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:02.540 12:41:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:02.540 12:41:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:02.540 12:41:27 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:02.540 12:41:27 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:02.540 12:41:27 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:02.540 12:41:27 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:02.540 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:02.540 fio-3.35 00:09:02.540 Starting 1 thread 00:09:09.099 00:09:09.099 test: (groupid=0, jobs=1): err= 0: pid=64454: Wed Nov 20 12:41:34 2024 00:09:09.099 read: IOPS=20.4k, BW=79.5MiB/s (83.4MB/s)(159MiB/2001msec) 00:09:09.099 slat (nsec): min=3350, max=80481, avg=5259.57, stdev=2786.82 00:09:09.099 clat (usec): min=225, max=14499, avg=3127.63, stdev=1220.30 00:09:09.099 lat (usec): min=229, max=14536, avg=3132.89, stdev=1221.66 00:09:09.099 clat percentiles (usec): 00:09:09.099 | 1.00th=[ 1876], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2376], 00:09:09.099 | 30.00th=[ 2442], 40.00th=[ 
2540], 50.00th=[ 2671], 60.00th=[ 2802], 00:09:09.099 | 70.00th=[ 3032], 80.00th=[ 3687], 90.00th=[ 5014], 95.00th=[ 5932], 00:09:09.099 | 99.00th=[ 7242], 99.50th=[ 7767], 99.90th=[ 9110], 99.95th=[10814], 00:09:09.099 | 99.99th=[14353] 00:09:09.099 bw ( KiB/s): min=75688, max=90115, per=100.00%, avg=83070.33, stdev=7219.42, samples=3 00:09:09.099 iops : min=18922, max=22528, avg=20767.33, stdev=1804.49, samples=3 00:09:09.099 write: IOPS=20.3k, BW=79.3MiB/s (83.2MB/s)(159MiB/2001msec); 0 zone resets 00:09:09.099 slat (nsec): min=3439, max=72197, avg=5415.32, stdev=2758.46 00:09:09.099 clat (usec): min=209, max=14396, avg=3141.96, stdev=1214.35 00:09:09.099 lat (usec): min=213, max=14410, avg=3147.37, stdev=1215.70 00:09:09.099 clat percentiles (usec): 00:09:09.099 | 1.00th=[ 1909], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2376], 00:09:09.099 | 30.00th=[ 2442], 40.00th=[ 2540], 50.00th=[ 2671], 60.00th=[ 2835], 00:09:09.099 | 70.00th=[ 3064], 80.00th=[ 3752], 90.00th=[ 5014], 95.00th=[ 5866], 00:09:09.099 | 99.00th=[ 7177], 99.50th=[ 7701], 99.90th=[ 9110], 99.95th=[11469], 00:09:09.099 | 99.99th=[13566] 00:09:09.099 bw ( KiB/s): min=76096, max=89789, per=100.00%, avg=83095.00, stdev=6851.59, samples=3 00:09:09.099 iops : min=19024, max=22447, avg=20773.67, stdev=1712.78, samples=3 00:09:09.099 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:09:09.099 lat (msec) : 2=1.69%, 4=80.54%, 10=17.64%, 20=0.08% 00:09:09.099 cpu : usr=98.85%, sys=0.15%, ctx=14, majf=0, minf=607 00:09:09.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:09.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:09.099 issued rwts: total=40750,40641,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.099 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:09.099 00:09:09.099 Run status group 0 (all jobs): 00:09:09.099 READ: bw=79.5MiB/s (83.4MB/s), 79.5MiB/s-79.5MiB/s (83.4MB/s-83.4MB/s), io=159MiB (167MB), run=2001-2001msec 00:09:09.099 WRITE: bw=79.3MiB/s (83.2MB/s), 79.3MiB/s-79.3MiB/s (83.2MB/s-83.2MB/s), io=159MiB (166MB), run=2001-2001msec 00:09:09.099 ----------------------------------------------------- 00:09:09.099 Suppressions used: 00:09:09.099 count bytes template 00:09:09.099 1 32 /usr/src/fio/parse.c 00:09:09.099 1 8 libtcmalloc_minimal.so 00:09:09.099 ----------------------------------------------------- 00:09:09.099 00:09:09.099 12:41:34 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:09.099 12:41:34 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:09.099 12:41:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:09.099 12:41:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:09.358 12:41:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:09.358 12:41:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:09.616 12:41:35 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:09.616 12:41:35 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:09.616 12:41:35 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:09.616 12:41:35 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:09.616 12:41:35 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:09.616 12:41:35 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:09.616 12:41:35 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:09.616 12:41:35 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:09.616 12:41:35 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:09.616 12:41:35 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:09.616 12:41:35 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:09.616 12:41:35 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:09.616 12:41:35 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:09.616 12:41:35 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:09.616 12:41:35 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:09.616 12:41:35 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:09.616 12:41:35 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:09.616 12:41:35 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:09.875 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:09.875 fio-3.35 00:09:09.875 Starting 1 thread 00:09:17.993 00:09:17.993 test: (groupid=0, jobs=1): err= 0: pid=64520: Wed Nov 20 12:41:43 2024 00:09:17.993 read: IOPS=18.4k, BW=71.9MiB/s (75.3MB/s)(144MiB/2001msec) 00:09:17.993 slat (nsec): min=4244, max=79201, avg=5514.77, stdev=2743.35 00:09:17.993 clat (usec): min=506, max=11158, avg=3461.14, stdev=1284.64 00:09:17.993 lat (usec): min=511, max=11195, avg=3466.66, stdev=1285.76 00:09:17.993 clat percentiles (usec): 00:09:17.993 | 1.00th=[ 2057], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2507], 00:09:17.993 | 30.00th=[ 2606], 40.00th=[ 2737], 50.00th=[ 2900], 60.00th=[ 3130], 00:09:17.993 | 70.00th=[ 3785], 80.00th=[ 4621], 90.00th=[ 5473], 95.00th=[ 6128], 00:09:17.993 | 99.00th=[ 7308], 99.50th=[ 7701], 99.90th=[ 8717], 99.95th=[ 8848], 00:09:17.993 | 99.99th=[11076] 00:09:17.993 bw ( KiB/s): min=65149, max=88288, per=100.00%, avg=76684.33, stdev=11569.65, samples=3 00:09:17.993 iops : min=16287, max=22072, avg=19171.00, stdev=2892.54, samples=3 00:09:17.993 write: IOPS=18.4k, BW=71.9MiB/s (75.4MB/s)(144MiB/2001msec); 0 zone resets 00:09:17.993 slat (nsec): min=4308, max=89629, avg=5662.31, stdev=2795.21 00:09:17.993 clat (usec): min=497, max=11096, avg=3472.40, stdev=1288.48 00:09:17.993 lat (usec): min=502, max=11110, avg=3478.06, stdev=1289.56 00:09:17.993 clat percentiles (usec): 00:09:17.993 | 1.00th=[ 2057], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2507], 00:09:17.993 | 30.00th=[ 2606], 40.00th=[ 2737], 50.00th=[ 2900], 60.00th=[ 3163], 00:09:17.993 | 70.00th=[ 3785], 80.00th=[ 4621], 90.00th=[ 5473], 95.00th=[ 6128], 
00:09:17.993 | 99.00th=[ 7308], 99.50th=[ 7832], 99.90th=[ 8586], 99.95th=[ 8848], 00:09:17.993 | 99.99th=[10945] 00:09:17.993 bw ( KiB/s): min=65604, max=88424, per=100.00%, avg=76809.33, stdev=11415.51, samples=3 00:09:17.993 iops : min=16401, max=22106, avg=19202.33, stdev=2853.88, samples=3 00:09:17.993 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:17.993 lat (msec) : 2=0.63%, 4=72.01%, 10=27.33%, 20=0.02% 00:09:17.993 cpu : usr=98.95%, sys=0.05%, ctx=6, majf=0, minf=605 00:09:17.993 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:17.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:17.993 issued rwts: total=36806,36817,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.993 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:17.993 00:09:17.993 Run status group 0 (all jobs): 00:09:17.993 READ: bw=71.9MiB/s (75.3MB/s), 71.9MiB/s-71.9MiB/s (75.3MB/s-75.3MB/s), io=144MiB (151MB), run=2001-2001msec 00:09:17.993 WRITE: bw=71.9MiB/s (75.4MB/s), 71.9MiB/s-71.9MiB/s (75.4MB/s-75.4MB/s), io=144MiB (151MB), run=2001-2001msec 00:09:17.993 ----------------------------------------------------- 00:09:17.993 Suppressions used: 00:09:17.993 count bytes template 00:09:17.993 1 32 /usr/src/fio/parse.c 00:09:17.993 1 8 libtcmalloc_minimal.so 00:09:17.993 ----------------------------------------------------- 00:09:17.993 00:09:17.993 12:41:43 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:17.993 12:41:43 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:17.993 00:09:17.993 real 0m30.864s 00:09:17.993 user 0m22.160s 00:09:17.993 sys 0m13.979s 00:09:17.993 12:41:43 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.993 ************************************ 00:09:17.993 12:41:43 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:17.993 END TEST nvme_fio 00:09:17.993 ************************************ 00:09:17.993 00:09:17.993 real 1m41.324s 00:09:17.993 user 3m44.958s 00:09:17.993 sys 0m24.646s 00:09:17.993 12:41:43 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.993 12:41:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:17.993 ************************************ 00:09:17.993 END TEST nvme 00:09:17.993 ************************************ 00:09:17.993 12:41:43 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:17.993 12:41:43 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:17.993 12:41:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:17.993 12:41:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.993 12:41:43 -- common/autotest_common.sh@10 -- # set +x 00:09:17.993 ************************************ 00:09:17.993 START TEST nvme_scc 00:09:17.993 ************************************ 00:09:17.994 12:41:43 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:18.252 * Looking for test storage... 
00:09:18.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:18.252 12:41:43 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:18.252 12:41:43 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:18.252 12:41:43 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:18.252 12:41:43 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:18.252 12:41:43 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:18.252 12:41:43 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.252 12:41:43 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:18.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.252 --rc genhtml_branch_coverage=1 00:09:18.252 --rc genhtml_function_coverage=1 00:09:18.252 --rc genhtml_legend=1 00:09:18.252 --rc geninfo_all_blocks=1 00:09:18.252 --rc geninfo_unexecuted_blocks=1 00:09:18.252 00:09:18.252 ' 00:09:18.252 12:41:43 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:18.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.253 --rc genhtml_branch_coverage=1 00:09:18.253 --rc genhtml_function_coverage=1 00:09:18.253 --rc genhtml_legend=1 00:09:18.253 --rc geninfo_all_blocks=1 00:09:18.253 --rc geninfo_unexecuted_blocks=1 00:09:18.253 00:09:18.253 ' 00:09:18.253 12:41:43 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:18.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.253 --rc genhtml_branch_coverage=1 00:09:18.253 --rc genhtml_function_coverage=1 00:09:18.253 --rc genhtml_legend=1 00:09:18.253 --rc geninfo_all_blocks=1 00:09:18.253 --rc geninfo_unexecuted_blocks=1 00:09:18.253 00:09:18.253 ' 00:09:18.253 12:41:43 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:18.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.253 --rc genhtml_branch_coverage=1 00:09:18.253 --rc genhtml_function_coverage=1 00:09:18.253 --rc genhtml_legend=1 00:09:18.253 --rc geninfo_all_blocks=1 00:09:18.253 --rc geninfo_unexecuted_blocks=1 00:09:18.253 00:09:18.253 ' 00:09:18.253 12:41:43 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:18.253 12:41:43 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:18.253 12:41:43 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:18.253 12:41:43 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:18.253 12:41:43 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:18.253 12:41:43 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:18.253 12:41:43 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.253 12:41:43 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.253 12:41:43 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.253 12:41:43 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.253 12:41:43 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.253 12:41:43 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.253 12:41:43 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:18.253 12:41:43 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
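The lcov version gate traced above (lt 1.15 2 via cmp_versions) boils down to splitting both version strings on '.', '-' and ':' and comparing the numeric fields left to right. A minimal sketch of that idea, not the exact scripts/common.sh helper, assuming purely numeric fields:

  version_lt() {
      # returns 0 (true) when $1 sorts strictly before $2
      local IFS=.-:
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly smaller
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not less-than
  }
  version_lt 1.15 2 && echo 'lcov predates 2.x'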
00:09:18.253 12:41:43 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:18.253 12:41:43 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:18.253 12:41:43 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:18.253 12:41:43 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:18.253 12:41:43 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:18.253 12:41:43 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:18.253 12:41:43 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:18.253 12:41:43 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:18.253 12:41:43 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:18.253 12:41:43 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:18.253 12:41:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:18.253 12:41:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:18.253 12:41:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:18.253 12:41:43 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:18.511 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:18.770 Waiting for block devices as requested 00:09:18.770 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.770 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.770 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:19.028 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.330 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:24.330 12:41:49 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:24.330 12:41:49 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:24.330 12:41:49 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:24.330 12:41:49 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:24.330 12:41:49 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
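The long trace that continues below is nvme/functions.sh's nvme_get loop: it runs nvme id-ctrl against each controller and, with IFS=:, reads every "register: value" line into a per-controller associative array (nvme0[vid], nvme0[mdts], and so on). A simplified sketch of the pattern; the real helper also evals the assignments and handles namespace subsections:

  declare -A nvme0
  while IFS=: read -r reg val; do
      [[ -n $reg && -n $val ]] || continue
      reg=${reg//[[:space:]]/}                      # e.g. vid, sn, oacs
      nvme0[$reg]=${val#"${val%%[![:space:]]*}"}    # strip leading blanks
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
  echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]}"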
00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.330 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.331 12:41:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
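Several of the fields captured here, such as oacs=0x12a just above, are bitmasks from the NVMe base specification; consumers test individual capability bits rather than the whole value. An illustrative check (bit positions taken from the NVMe 1.4 OACS definition; verify against the spec before relying on them):

  oacs=0x12a                                   # value parsed above
  (( oacs & (1 << 1) )) && echo 'Format NVM supported'
  (( oacs & (1 << 3) )) && echo 'Namespace Management supported'
  (( oacs & (1 << 8) )) && echo 'Doorbell Buffer Config supported'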
00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.331 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:24.332 12:41:49 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.332 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:24.333 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:24.334 12:41:49 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:24.334 
12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.334 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
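
The trace in this region records nvme/functions.sh lines 16-23 building one global bash associative array per controller or namespace: for every "field : value" line that /usr/local/src/nvme-cli/nvme id-ctrl or id-ns prints, the loop splits on the colon (IFS=:), skips lines with no value, and evals the pair into the array (for example ng0n1[nsze]="0x140000"). The helper below is a minimal sketch of that pattern reconstructed from this trace, assuming nvme-cli's plain-text output format; it is not the verbatim upstream function, and the nvme binary path is simply the one this log invokes.

nvme_get() {
    local ref=$1 reg val
    shift                               # remaining args form the nvme subcommand, e.g. id-ns /dev/ng0n1
    local -gA "$ref=()"                 # declare a global associative array, e.g. ng0n1=()
    while IFS=: read -r reg val; do     # split each "reg : val" output line on the colon
        reg=${reg//[[:space:]]/}        # drop the padding nvme-cli puts around field names
        [[ -n $val ]] || continue       # header and blank lines carry no value; skip them
        eval "${ref}[$reg]=\"${val# }\""   # store the field, e.g. ng0n1[nsze]="0x140000"
    done < <(/usr/local/src/nvme-cli/nvme "$@")
}

Called as nvme_get ng0n1 id-ns /dev/ng0n1 (matching functions.sh@57 above), this leaves the fields addressable as "${ng0n1[nsze]}" or "${nvme0[oncs]}", which is what the later nvme_scc feature checks read.
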
00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:24.335 12:41:49 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.335 12:41:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.335 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:24.336 12:41:49 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.336 12:41:49 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.336 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:24.337 12:41:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.337 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:24.338 12:41:49 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:24.338 12:41:49 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:24.338 12:41:49 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:24.338 12:41:49 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:24.338 12:41:49 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.338 
12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.338 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:24.339 
12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:24.339 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.340 12:41:49 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
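(Annotation. Among the registers captured just above, sqes=0x66 and cqes=0x44 are packed nibbles: per the NVMe Identify Controller layout, the low four bits give the required (minimum) queue-entry size and the high four bits the maximum, both as log2 of bytes. A quick decode of this run's values; the snippet is illustrative, not part of the test:

    sqes=0x66 cqes=0x44
    printf 'SQE: min %d, max %d bytes\n' $(( 1 << (sqes & 0xF) )) $(( 1 << (sqes >> 4) ))
    printf 'CQE: min %d, max %d bytes\n' $(( 1 << (cqes & 0xF) )) $(( 1 << (cqes >> 4) ))
    # -> 64-byte submission entries, 16-byte completion entries
)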
00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:24.340 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.341 12:41:49 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.341 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:24.342 12:41:49 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
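(Annotation. The namespace walk that kicked off this id-ns pass (functions.sh@54 in the trace) uses a bash extglob pattern to match both namespace node flavors under one controller: the generic character device ng1n1 and the block device nvme1n1. A standalone illustration of that exact pattern; the shopt lines are assumptions, since functions.sh enables its shell options elsewhere:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme1
    # The two parameter expansions inside @(...|...) resolve to:
    #   ng${ctrl##*nvme}  -> ng1      (generic char node prefix)
    #   ${ctrl##*/}n      -> nvme1n   (block node prefix)
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "namespace node: ${ns##*/}"   # ng1n1, then nvme1n1
    done
)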
00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.342 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:24.343 12:41:49 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.343 
12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:24.343 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
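(Annotation. With ng1n1 fully parsed a few entries back, the captured fields are enough to compute the namespace's byte capacity: nsze counts logical blocks, the low nibble of flbas selects the active LBA format, and that format's lbads is log2 of the block size. A sketch over this run's values; the arithmetic is illustrative, not part of functions.sh:

    idx=$(( ${ng1n1[flbas]} & 0xF ))     # 7 here: lbaf7 is the one "(in use)"
    lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${ng1n1[lbaf$idx]}")
    echo "$(( ${ng1n1[nsze]} * (1 << lbads) )) bytes"   # 0x17a17a blocks * 4096

The nvme1n1 block-device pass now in progress reads back the same identify data through the other node, so the two arrays should end up identical.)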
00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:24.344 
12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.344 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.345 12:41:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.345 12:41:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:24.345 12:41:49 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:24.345 12:41:49 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:24.345 12:41:49 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:24.345 12:41:49 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:24.345 12:41:49 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:24.345 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.346 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:24.347 12:41:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.347 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:24.348 12:41:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.348 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:24.349 
12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:24.349 
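[Between functions.sh@47 and @63 the trace walks /sys/class/nvme/nvme*, skips controllers that pci_can_use rejects, identifies each one (here nvme2 at 0000:00:12.0), registers it in the ctrls/nvmes/bdfs/ordered_ctrls tables, and then enumerates its namespaces with the @54 extglob, which matches both the generic char nodes (ng2n1) and the block nodes (nvme2n1). A simplified reconstruction of that loop follows; it assumes the nvme_get sketch above, stubs pci_can_use (the real filter in scripts/common.sh honors PCI_ALLOWED/PCI_BLOCKED), and guesses at how the PCI address is derived from sysfs.]

shopt -s extglob nullglob          # nullglob makes the loops no-ops off-target
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls
pci_can_use() { return 0; }        # stand-in for the scripts/common.sh check

for ctrl in /sys/class/nvme/nvme*; do
  [[ -e $ctrl ]] || continue                          # @48
  ctrl_dev=${ctrl##*/}                                # e.g. nvme2 (@51)
  pci=$(basename "$(readlink -f "$ctrl/device")")     # e.g. 0000:00:12.0 (assumed)
  pci_can_use "$pci" || continue                      # @50
  nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"       # @52

  unset -n _ctrl_ns                                   # the real code uses local -n (@53)
  declare -n _ctrl_ns="${ctrl_dev}_ns"
  # @54: one extglob catches ng2n1-style char nodes and nvme2n1-style block nodes:
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    ns_dev=${ns##*/}                                  # @56
    nvme_get "$ns_dev" id-ns "/dev/$ns_dev"           # @57
    _ctrl_ns[${ns_dev##*n}]=$ns_dev                   # keyed by namespace id (@58)
  done

  ctrls["$ctrl_dev"]=$ctrl_dev                        # @60
  nvmes["$ctrl_dev"]="${ctrl_dev}_ns"                 # @61
  bdfs["$ctrl_dev"]=$pci                              # @62
  ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev          # @63
done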
12:41:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:24.349 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:24.350 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.351 12:41:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val
00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()'
00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000
00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000
00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000
00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14
00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7
00:09:24.351 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4
00:09:24.352 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3
00:09:24.352 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f
00:09:24.352 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0
00:09:24.352 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0
00:09:24.352 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0
00:09:24.352 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0
00:09:24.352 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1
00:09:24.352 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0
00:09:24.352 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0
00:09:24.352 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0
00:09:24.352 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0
00:09:24.352 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0
00:09:24.352 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0
00:09:24.352 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0
00:09:24.352 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0
00:09:24.352 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0
00:09:24.352 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0
00:09:24.352 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0
00:09:24.352 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npda]=0
00:09:24.352 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0
00:09:24.352 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128
00:09:24.352 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128
00:09:24.352 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127
00:09:24.352 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0
00:09:24.352 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0
00:09:24.353 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0
00:09:24.353 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0
00:09:24.353 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0
00:09:24.353 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000
00:09:24.353 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000
00:09:24.353 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:24.353 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:24.353 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:24.353 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:24.353 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:24.353 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:24.353 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:24.353 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:24.353 12:41:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
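The trace repeats one fixed pattern per namespace: nvme_get runs nvme-cli's id-ns against the device node, reads the output as colon-separated reg/val pairs, and evals each pair into a global associative array named after the namespace (ng2n2[nsze]=0x100000 and so on). Below is a minimal standalone sketch of that loop, assuming the plain-text `field : value` output that `nvme id-ns` prints; nvme_get_sketch is a hypothetical simplification, not the nvme/functions.sh implementation itself.

    #!/usr/bin/env bash
    # Sketch of the traced nvme_get pattern: parse "field : value" lines
    # from `nvme id-ns` into a global associative array named after the
    # namespace. Hypothetical simplification; the real function also
    # handles quirks (lbaf rows, etc.) that are omitted here.
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # e.g. declare -gA ng2n2=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # strip padding around the key
            val=${val# }                     # drop the single leading space
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[${reg}]=\"\$val\""  # ng2n2[nsze]=0x100000, ...
        done < <("$@")
    }

    # usage (assumes nvme-cli is installed and /dev/ng2n2 exists):
    # nvme_get_sketch ng2n2 nvme id-ns /dev/ng2n2
    # echo "${ng2n2[nsze]}"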
00:09:24.353 12:41:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:24.353 12:41:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:09:24.353 12:41:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:09:24.353 12:41:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:09:24.353 12:41:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val
00:09:24.353 12:41:49 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:24.353 12:41:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:09:24.353 12:41:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:09:24.353 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000
00:09:24.353 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000
00:09:24.353 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000
00:09:24.354 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14
00:09:24.354 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7
00:09:24.354 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4
00:09:24.354 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3
00:09:24.354 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f
00:09:24.354 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0
00:09:24.354 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0
00:09:24.354 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0
00:09:24.354 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0
00:09:24.354 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1
00:09:24.354 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0
00:09:24.354 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0
00:09:24.354 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0
00:09:24.354 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0
00:09:24.354 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0
00:09:24.354 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0
00:09:24.354 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0
00:09:24.354 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0
00:09:24.354 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0
00:09:24.354 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0
00:09:24.354 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0
00:09:24.354 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0
00:09:24.354 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0
00:09:24.354 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128
00:09:24.354 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
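The for-loop header at functions.sh@54 is an extglob pattern that enumerates both kinds of namespace node under one controller directory: the character nodes (ng2n1, ng2n2, ...) and the block nodes (nvme2n1, nvme2n2, ...), which is why each namespace above is visited twice, once per node type. A hedged sketch of how that expansion works, with an illustrative sysfs path matching this run:

    #!/usr/bin/env bash
    # Sketch of the namespace-enumeration glob seen at nvme/functions.sh@54.
    # Assumes a sysfs layout like the one in this run; paths are illustrative.
    shopt -s extglob nullglob

    ctrl=/sys/class/nvme/nvme2
    # ${ctrl##*nvme} -> "2"     => "ng2"    matches ng2n1, ng2n2, ng2n3
    # ${ctrl##*/}    -> "nvme2" => "nvme2n" matches nvme2n1, nvme2n2, nvme2n3
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        echo "found namespace node: ${ns##*/}"
    done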
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:09:24.355 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:09:24.356 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
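Every namespace in this run reports nlbaf=7 and flbas=0x4, and lbaf4 is the format marked "(in use)": 4096-byte data blocks with no metadata (ms:0 lbads:12). A sketch of how the in-use block size can be derived from the values just parsed; lba_size_sketch is hypothetical, and the array literal simply mirrors what this log stored:

    #!/usr/bin/env bash
    # Sketch: derive the in-use LBA data size from the parsed id-ns values,
    # e.g. nvme2n1[flbas]=0x4 selecting nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'.
    declare -A nvme2n1=(
        [flbas]=0x4
        [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
    )

    lba_size_sketch() {
        local -n ns=$1                     # nameref to the parsed array
        local fmt=$(( ns[flbas] & 0xf ))   # low nibble of FLBAS = format index
        local lbads=${ns[lbaf${fmt}]#*lbads:}
        lbads=${lbads%% *}                 # "12" from "ms:0 lbads:12 rp:0 ..."
        echo $(( 1 << lbads ))             # 2^12 = 4096-byte blocks
    }

    lba_size_sketch nvme2n1   # prints 4096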
]] 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.357 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:24.358 12:41:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:24.358 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.359 12:41:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:24.359 
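
At this point the trace re-enters nvme_get for the third namespace. The mechanism visible in the records above and below: declare a global associative array named after the device, run nvme id-ns, split each output line on the first colon, and eval the key/value pair into the array. A minimal sketch reconstructed from the trace alone, assuming the real nvme/functions.sh differs only in details such as whitespace trimming:

    nvme_get() {                          # cf. functions.sh@17-23 in this trace
        local ref=$1 reg val
        shift
        local -gA "$ref=()"               # e.g. local -gA 'nvme2n3=()'
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}      # keys arrive padded, e.g. 'nsze '
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[$reg]=\"\$val\"" # e.g. eval 'nvme2n3[nsze]="0x100000"'
        done < <(nvme "$@")               # resolved in this job to /usr/local/src/nvme-cli/nvme
    }

Called as nvme_get nvme2n3 id-ns /dev/nvme2n3, which is exactly the invocation shown at functions.sh@57 above.
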
12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:24.359 12:41:49 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.359 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.360 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.361 12:41:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:24.361 12:41:49 nvme_scc -- 
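
The lbaf0-lbaf7 strings captured above are verbatim nvme-cli id-ns output. lbads is a power-of-two exponent, so this run exposes two data-block sizes, and flbas=0x4 (its low nibble selects the active format) is why lbaf4 carries the "(in use)" marker. Decoded:

    for lbads in 9 12; do
        printf 'lbads:%d -> %d-byte data blocks\n' "$lbads" $((1 << lbads))
    done
    # lbads:9 -> 512-byte data blocks
    # lbads:12 -> 4096-byte data blocks
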
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:24.361 12:41:49 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:24.361 12:41:49 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:24.361 12:41:49 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:24.361 12:41:49 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.361 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.621 12:41:49 
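
A few records back the trace passed through pci_can_use (scripts/common.sh@18-27) before touching nvme3: both tests it runs expand to empty strings in this job, so the device at 0000:00:13.0 is accepted and id-ctrl parsing proceeds. One plausible shape of that gate; the allow-list and block-list variable names are assumptions here, since the log never prints them:

    pci_can_use() {                           # cf. scripts/common.sh@18-27
        local i pci=$1
        if [[ -n ${PCI_ALLOWED:-} ]]; then    # assumed name; empty in this run (@21)
            [[ $PCI_ALLOWED =~ $pci ]] || return 1
        fi
        [[ -z ${PCI_BLOCKED:-} ]] || return 1 # assumed name; empty here too (@25), simplified
        return 0                              # @27: device is usable
    }
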
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:24.621 12:41:49 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:24.621 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:24.622 12:41:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 
12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:24.622 12:41:49 nvme_scc -- 
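
The wctemp=343 and cctemp=373 values above are integer kelvins, as id-ctrl reports them: the warning and critical composite-temperature thresholds of this QEMU controller are 70 C and 100 C.

    for k in 343 373; do printf '%d K = %d C\n' "$k" $((k - 273)); done
    # 343 K = 70 C
    # 373 K = 100 C
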
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.622 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 
12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:24.623 
12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read 
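
The sqes=0x66 and cqes=0x44 values above each pack two log2 sizes into one byte: the low nibble is the required queue-entry size, the high nibble the maximum. Both resolve to the standard 64-byte submission and 16-byte completion queue entries:

    for f in 0x66 0x44; do
        printf '%s: min %d, max %d bytes per entry\n' "$f" $((1 << (f & 0xf))) $((1 << (f >> 4)))
    done
    # 0x66: min 64, max 64 bytes per entry
    # 0x44: min 16, max 16 bytes per entry
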
-r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:24.623 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.624 12:41:49 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:24.624 12:41:49 nvme_scc -- 
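
The odd nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' entry above falls out of parsing nvme-cli's multi-field power-state lines with a colon delimiter: read splits on the first colon only and hands the remainder, later colons included, to the value variable (the script evidently also trims the surrounding whitespace). A standalone demonstration, with the input line invented to match nvme-cli's layout:

    IFS=: read -r reg val <<< 'rwt   : 0 rwl:0 idle_power:- active_power:-'
    printf 'reg=[%s] val=[%s]\n' "$reg" "$val"
    # reg=[rwt   ] val=[ 0 rwl:0 idle_power:- active_power:-]
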
00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:09:24.624 12:41:49 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@192-199 -- # get_ctrls_with_feature scc: for each of nvme1, nvme0, nvme3 and nvme2, ctrl_has_scc reads oncs=0x15d via get_nvme_ctrl_feature (local -n _ctrl=<name>) and tests (( oncs & 1 << 8 )); the bit is set on all four controllers, so each one is echoed as a candidate
00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:09:24.624 12:41:49 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:09:24.624 12:41:49 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:09:24.624 12:41:49 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:09:24.624 12:41:49 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:09:24.883 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:25.449 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:09:25.449 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:09:25.449 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:09:25.449 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
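The selection above hinges on bit 8 of ONCS (Optional NVM Command Support), the bit that advertises the Copy command these SCC tests exercise. With the oncs=0x15d value every controller reported, the check is a single arithmetic expression:

    oncs=0x15d                     # 0b101011101, from id-ctrl
    (( oncs & 1 << 8 )) && echo "Copy (SCC) supported"
    # Bit 8 is set in 0x15d, which is why all four QEMU controllers qualify.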
00:09:25.449 12:41:50 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:25.449 ************************************
00:09:25.449 START TEST nvme_simple_copy
00:09:25.449 ************************************
00:09:25.707 Initializing NVMe Controllers
00:09:25.707 Attaching to 0000:00:10.0
00:09:25.707 Controller supports SCC. Attached to 0000:00:10.0
00:09:25.707 Namespace ID: 1 size: 6GB
00:09:25.707 Initialization complete.
00:09:25.707 Controller QEMU NVMe Ctrl (12340 )
00:09:25.707 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:09:25.707 Namespace Block Size:4096
00:09:25.707 Writing LBAs 0 to 63 with Random Data
00:09:25.707 Copied LBAs from 0 - 63 to the Destination LBA 256
00:09:25.707 LBAs matching Written Data: 64
00:09:25.707 ************************************
00:09:25.707 END TEST nvme_simple_copy
00:09:25.707 ************************************
00:09:25.707 real 0m0.261s
00:09:25.707 user 0m0.091s
00:09:25.707 sys 0m0.068s
00:09:25.965 ************************************
00:09:25.965 END TEST nvme_scc
00:09:25.965 ************************************
00:09:25.965 real 0m7.816s
00:09:25.965 user 0m1.129s
00:09:25.965 sys 0m1.443s
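The pass condition is the "LBAs matching Written Data: 64" line: simple_copy writes 64 random LBAs, issues an NVMe Copy to destination LBA 256, reads the destination back, and counts matching blocks. A rough userspace analogue of the verification half (hypothetical device path; dd/cmp stand in for the test binary's reads and writes, and the Copy command itself is issued by simple_copy, not by any shell tool):

    bs=4096   # namespace block size reported above
    dd if=/dev/urandom of=src.bin bs=$bs count=64 status=none
    dd if=src.bin of=/dev/nvme0n1 bs=$bs seek=0 oflag=direct status=none    # LBAs 0-63
    # ... here the Copy command would move LBAs 0-63 to LBA 256 ...
    dd if=/dev/nvme0n1 of=dst.bin bs=$bs skip=256 count=64 iflag=direct status=none
    cmp -s src.bin dst.bin && echo "LBAs matching Written Data: 64"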
00:09:25.965 12:41:51 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:09:25.965 12:41:51 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:09:25.965 12:41:51 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:09:25.965 12:41:51 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:09:25.965 12:41:51 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:09:25.965 ************************************
00:09:25.965 START TEST nvme_fdp
00:09:25.965 ************************************
00:09:25.965 * Looking for test storage...
00:09:25.965 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:09:25.965 12:41:51 nvme_fdp -- common/autotest_common.sh@1692-1693 -- # lcov --version | awk '{print $NF}' -> 1.15, then: lt 1.15 2
00:09:25.965 12:41:51 nvme_fdp -- scripts/common.sh@333-368 -- # cmp_versions 1.15 '<' 2: both strings are split on IFS=.-:, the fields are walked left to right (decimal 1 vs decimal 2), and the function returns 0 at the first position where ver1 < ver2
00:09:25.966 12:41:51 nvme_fdp -- scripts/common.sh@368 -- # return 0
00:09:25.966 12:41:51 nvme_fdp -- common/autotest_common.sh@1694-1707 -- # lcov is older than 2, so the legacy coverage options are exported: LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1' and LCOV="lcov $LCOV_OPTS"
00:09:25.966 12:41:51 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:09:25.966 12:41:51 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:09:25.966 12:41:51 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:09:25.966 12:41:51 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:09:25.966 12:41:51 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:25.966 12:41:51 nvme_fdp -- scripts/common.sh@552-553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
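cmp_versions is the whole gate here: split, compare numerically field by field, stop at the first difference. A compact sketch of the same algorithm (numeric-only components assumed; the real scripts/common.sh also splits on '-' and ':' and backs other comparison operators, not just '<'):

    lt() {   # succeed when version $1 sorts strictly before $2
        local -a v1 v2; local i
        IFS=. read -ra v1 <<< "$1"
        IFS=. read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* option names"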
00:09:25.966 12:41:51 nvme_fdp -- paths/export.sh@2-6 -- # each source of export.sh prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to the front of PATH, so by now PATH carries four copies of those toolchain directories ahead of /usr/local/bin:...:/var/lib/snapd/snap/bin; the result is exported and echoed
00:09:25.966 12:41:51 nvme_fdp -- nvme/functions.sh@10-14 -- # declare -A ctrls nvmes bdfs; declare -a ordered_ctrls; nvme_name=
00:09:25.966 12:41:51 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:25.966 12:41:51 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:09:26.532 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:26.532 Waiting for block devices as requested
00:09:26.532 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:09:26.532 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:09:26.790 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:09:26.790 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:09:32.074 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:09:32.074 12:41:57 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
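The duplicated PATH is a side effect of export.sh being sourced once per helper script; lookups are unaffected because the shell stops at the first matching directory. If the noise mattered, an order-preserving dedupe is a one-liner (a sketch, not something the harness does):

    # Keep the first occurrence of each PATH entry, preserving order.
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
    export PATH=${PATH%:}   # trim the trailing ':' left by the join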
00:09:32.074 12:41:57 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:09:32.074 12:41:57 nvme_fdp -- nvme/functions.sh@47-51 -- # /sys/class/nvme/nvme0 exists; pci=0000:00:11.0; pci_can_use 0000:00:11.0 returns 0 (no PCI_ALLOWED/PCI_BLOCKED filters set); ctrl_dev=nvme0
00:09:32.074 12:41:57 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 (/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0), parsed field by field into the nvme0 array:
00:09:32.074 vid=0x1b36 ssvid=0x1af4 sn='12341   ' mn='QEMU NVMe Ctrl   ' fr='8.0.0   ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:09:32.075 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0
00:09:32.076 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7
00:09:32.077 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:09:32.077 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:09:32.077 12:41:57 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:09:32.077 12:41:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:32.077 12:41:57 nvme_fdp -- nvme/functions.sh@55-56 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]; ns_dev=ng0n1
00:09:32.077 12:41:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
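The @(...) glob on functions.sh line 54 is why both device flavors are picked up: with extglob enabled (scripts/common.sh ran shopt -s extglob earlier), it expands to a pattern matching generic character nodes (ng0n1) and block nodes (nvme0n1) under the controller's sysfs directory. A standalone sketch of the expansion:

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme0
    # ${ctrl##*nvme} -> "0" and ${ctrl##*/} -> "nvme0", so the pattern below
    # is @(ng0|nvme0n)* and matches ng0n1, nvme0n1, ng0n2, ...
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] && echo "namespace node: ${ns##*/}"
    done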
00:09:32.077 12:41:57 nvme_fdp -- nvme/functions.sh@16-23 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1, parsed field by field into the ng0n1 array:
00:09:32.077 nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:32.078 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128
00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:32.078 12:41:57 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.078 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
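The dump above is nvme/functions.sh populating a bash associative array (ng0n1) from `nvme id-ns` output, one `field : value` pair per iteration of the IFS=: read loop. A minimal sketch of that pattern, with simplified whitespace handling (illustrative, not the SPDK source verbatim; NVME_CMD is a hypothetical override, the log itself runs /usr/local/src/nvme-cli/nvme):

  nvme_get() {                            # usage: nvme_get ng0n1 id-ns /dev/ng0n1
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                 # global associative array named after the device
      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue       # skip banner/blank lines with no "field : value" pair
          reg=${reg//[[:space:]]/}        # strip the padding nvme-cli prints around field names
          val=${val# }                    # trim the space after the colon (simplified)
          eval "${ref}[${reg}]=\"${val}\""   # e.g. ng0n1[nsze]="0x140000"
      done < <("${NVME_CMD:-nvme}" "$@")
  }

Every `eval '...[reg]="..."'` line in the trace is one pass through a loop of this shape.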
00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.079 12:41:57 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:32.079 12:41:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:32.079 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.080 12:41:57 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:32.080 12:41:57 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:32.080 12:41:57 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:32.080 12:41:57 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.080 12:41:57 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:32.080 12:41:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:32.081 12:41:57 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.081 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:32.082 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.083 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:32.084 12:41:57 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
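The trace above (nvme/functions.sh@16-23) shows the mechanism behind every one of these entries: nvme-cli's human-readable `id-ctrl`/`id-ns` output is read line by line, split on the first `:` with `IFS=: read -r reg val`, and each non-empty value is stored into a global associative array named after the device via `eval`. A minimal standalone re-creation of that loop, under assumed names (`nvme_get_sketch` is hypothetical; the real helper is the `nvme_get` function being traced):

  # Parse "reg : val" pairs from an nvme-cli command (e.g. "sqes : 0x66")
  # into a global associative array named $1, as the trace above does.
  nvme_get_sketch() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"               # e.g. declare -gA nvme1=()
      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue     # header/blank lines carry no value
          reg=${reg//[[:space:]]/}      # strip the column padding
          val=${val# }                  # drop the space after ':'
          eval "${ref}[\$reg]=\$val"    # nvme1[sqes]='0x66'
      done < <("$@")
  }
  # Usage mirroring the trace: nvme_get_sketch nvme1 nvme id-ctrl /dev/nvme1

Expanding `$reg`/`$val` inside the eval'd assignment (rather than interpolating them into the eval string) keeps multi-word values such as the power-state descriptors intact.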
00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:32.084 12:41:57 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.084 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
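The namespace walk that produced this `ng1n1` array was traced at nvme/functions.sh@54-58 above: an extglob under the controller's sysfs directory matches both the generic character node (`ng1n1`) and the block node (`nvme1n1`), and `${ns##*n}` reduces either path to the namespace ID. A sketch of that loop in standalone form (assumed variable names):

  # Enumerate both namespace node flavours under a controller, keyed by NSID.
  shopt -s extglob nullglob
  ctrl=/sys/class/nvme/nvme1
  declare -A nvme1_ns=()
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      [[ -e $ns ]] || continue
      nvme1_ns[${ns##*n}]=${ns##*/}   # nvme1_ns[1]=ng1n1, later nvme1n1
  done

Because both node names map to the same NSID key, the block node overwrites the char node in the map, which matches the order seen in the trace (functions.sh@58 first records ng1n1, then nvme1n1).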
00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.085 12:41:57 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.085 12:41:57 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.085 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:32.086 12:41:57 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:32.086 12:41:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.086 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
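The `lbafN` strings captured here encode each LBA format: `ms` is the metadata size in bytes, `lbads` is log2 of the data size (so `lbads:9` is 512-byte and `lbads:12` is 4096-byte blocks), and `rp` is the relative performance hint; `flbas=0x7` selects `lbaf7`, the entry nvme-cli marks "(in use)". A small sketch recovering the active block size from the values stored above (array literal repeated here only for self-containment):

  # Derive the in-use LBA data size from flbas (format index in bits 3:0)
  # and the matching lbafN descriptor string captured by the trace.
  declare -A nvme1n1=(
      [flbas]=0x7
      [lbaf7]='ms:64 lbads:12 rp:0 (in use)'
  )
  fmt=$((nvme1n1[flbas] & 0xf))
  [[ ${nvme1n1[lbaf$fmt]} =~ lbads:([0-9]+) ]]
  echo "block size: $((1 << BASH_REMATCH[1])) bytes"   # -> 4096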
00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:32.087 12:41:57 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:32.087 12:41:57 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:32.087 12:41:57 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.087 12:41:57 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
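Before this controller was scanned, the trace at nvme/functions.sh@60-63 recorded the per-controller bookkeeping: `ctrls` maps a controller to its associative-array name, `nvmes` to its namespace map, `bdfs` to its PCI address, and `ordered_ctrls` indexes controllers numerically. The `ver=0x10400` captured just above decodes per the NVMe spec as major in bits 31:16, minor in bits 15:8, tertiary in bits 7:0. A sketch of both, with illustrative values taken from the trace:

  # Bookkeeping as traced at functions.sh@60-63, for one controller.
  declare -A ctrls=() nvmes=() bdfs=()
  declare -a ordered_ctrls=()
  ctrl_dev=nvme1 pci=0000:00:10.0
  ctrls[$ctrl_dev]=$ctrl_dev          # controller -> its id-ctrl array name
  nvmes[$ctrl_dev]=${ctrl_dev}_ns     # controller -> its namespace map name
  bdfs[$ctrl_dev]=$pci                # controller -> PCI BDF
  ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # index 1 -> nvme1

  # Decode the VER field captured above: 0x10400 -> NVMe 1.4.0.
  ver=0x10400
  printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))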
00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:32.087 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:32.088 12:41:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
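The `wctemp=343` and `cctemp=373` values recorded here are in kelvin, as the NVMe spec defines the warning and critical composite-temperature thresholds. A one-liner sketch converting them (array repeated for self-containment):

  # Thresholds are reported in kelvin; subtract 273 for degrees Celsius.
  declare -A nvme2=( [wctemp]=343 [cctemp]=373 )
  echo "warning:  $((nvme2[wctemp] - 273)) C"   # -> 70 C
  echo "critical: $((nvme2[cctemp] - 273)) C"   # -> 100 C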
00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:32.088 12:41:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:32.088 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:32.089 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
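The eval/IFS lines above are bash xtrace output from the nvme_get helper in nvme/functions.sh: it runs nvme-cli's id-ctrl against the device, splits each "field : value" output line on the colon, and evals the pair into a global associative array such as nvme2. A minimal sketch of that loop, assuming nvme-cli's usual "field : value" layout; this is a simplified reconstruction for orientation, not the SPDK original, and the NVME_CLI variable and whitespace trimming are assumptions:

    NVME_CLI=${NVME_CLI:-/usr/local/src/nvme-cli/nvme}   # binary path seen in this log

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                     # global assoc array, e.g. nvme2

        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}            # field names arrive padded
            # trim surrounding whitespace from the value
            val="${val#"${val%%[![:space:]]*}"}"
            val="${val%"${val##*[![:space:]]}"}"
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[${reg}]=\"\$val\""     # -> nvme2[wctemp]=343, etc.
        done < <("$NVME_CLI" "$@")
    }

Usage matching this run would be nvme_get nvme2 id-ctrl /dev/nvme2, after which "${nvme2[subnqn]}" holds nqn.2019-08.org.qemu:12342 as captured below.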
00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 
12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.090 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 12:41:57 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:32.091 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:32.092 12:41:57 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 
12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:32.092 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 12:41:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:32.093 
12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.093 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
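After the controller dump, the trace walks each namespace node (functions.sh@53-58): a nameref ties _ctrl_ns to the per-controller array (nvme2_ns), an extglob pattern matches both the generic character nodes (ng2n1, ng2n2, ng2n3) and the block nodes (nvme2n1, ...), and each hit is identified via nvme_get id-ns and indexed by namespace id. A sketch of that walk under the same assumptions; scan_ctrl_namespaces is a hypothetical wrapper name, and the caller is assumed to declare the _ns array first:

    shopt -s extglob nullglob                   # the @(...) pattern needs extglob

    scan_ctrl_namespaces() {
        local ctrl=$1 ns ns_dev                 # e.g. /sys/class/nvme/nvme2
        local -n _ctrl_ns=${ctrl##*/}_ns        # nameref -> nvme2_ns

        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            [[ -e $ns ]] || continue            # mirrors the -e check in the trace
            ns_dev=${ns##*/}                    # ng2n1, ng2n2, ng2n3, ...
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev         # index by namespace id: 1, 2, 3
        done
    }

Usage: declare -A nvme2_ns=(); scan_ctrl_namespaces /sys/class/nvme/nvme2 — which reproduces the ng2n1/ng2n2/ng2n3 passes traced around this point.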
00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:32.094 12:41:57 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.094 12:41:57 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.094 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:32.095 12:41:57 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.095 
12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:32.095 12:41:57 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.095 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.096 
12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
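Each lbafN entry captured here describes one LBA format: ms is the per-block metadata size in bytes, lbads is the base-2 log of the data block size, and rp is the relative performance hint. With flbas=0x4 the in-use format is lbaf4 (lbads:12, i.e. 4096-byte blocks, no metadata), so nsze=0x100000 blocks works out to 4 GiB. A hedged sketch decoding that from values like the ones captured above; the array literal mirrors the trace, and masking flbas with 0xf (ignoring the extended-LBA bit) is a simplifying assumption:

  declare -A nvme2n1=(
    [nsze]=0x100000 [flbas]=0x4
    [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
  )
  idx=$(( ${nvme2n1[flbas]} & 0xf ))         # 0x4 -> lbaf4
  lbaf=${nvme2n1[lbaf${idx}]}
  lbads=${lbaf#*lbads:}; lbads=${lbads%% *}  # pull the lbads field -> 12
  echo "block size: $(( 1 << lbads )) bytes"                      # 4096
  echo "capacity  : $(( ${nvme2n1[nsze]} * (1 << lbads) )) bytes" # 4 GiB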
00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.096 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:32.097 12:41:57 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:32.097 12:41:57 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:32.097 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 12:41:57 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:32.098 12:41:57 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 12:41:57 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.098 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:32.099 12:41:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.099 12:41:57 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.099 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:32.100 12:41:57 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:32.100 12:41:57 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:32.100 12:41:57 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.100 12:41:57 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 12:41:57 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:32.100 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 
12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:32.101 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:32.102 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
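[Editor's note] The eval wall above is the heart of nvme_get: `nvme id-ctrl /dev/nvme3` prints one "reg : val" pair per line, and the loop splits each pair on ":" and stores it in the global associative array nvme3 (the trace resumes below with the loop's IFS reset and next read). A minimal hedged sketch of that loop, assuming the array and helper shape implied by the xtrace rather than quoting functions.sh verbatim:

    # Hedged reconstruction of the nvme_get pattern driving this xtrace; the
    # CI VM invokes /usr/local/src/nvme-cli/nvme rather than the system nvme.
    # Trailing spaces in values (e.g. sn='12343 ') are kept, as in the log.
    nvme_get() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                    # e.g. declare -gA nvme3=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}           # "mdts   " -> "mdts"
            val=${val# }                       # drop the space after ":"
            [[ -n $val ]] && eval "${ref}[$reg]=\"\$val\""   # nvme3[mdts]="7"
        done < <(nvme id-ctrl "$dev")
    }
    # nvme_get nvme3 /dev/nvme3; echo "${nvme3[ctratt]}"    # -> 0x88010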
00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.103 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:32.362 12:41:57 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:32.362 12:41:57 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:32.363 12:41:57 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:32.363 12:41:57 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:32.363 12:41:57 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:32.363 12:41:57 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:32.363 12:41:57 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:32.363 12:41:57 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:32.363 12:41:57 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:32.363 12:41:57 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:32.363 12:41:57 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:32.363 12:41:57 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:32.363 12:41:57 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:32.363 12:41:57 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:32.363 12:41:57 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:32.363 12:41:57 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:32.363 12:41:57 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:32.363 12:41:57 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:32.363 12:41:57 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:32.363 12:41:57 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:32.363 12:41:57 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:32.621 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:33.188 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:33.188 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:33.188 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:33.188 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:33.188 12:41:58 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:33.188 12:41:58 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:33.188 12:41:58 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.188 12:41:58 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:33.188 ************************************ 00:09:33.188 START TEST nvme_flexible_data_placement 00:09:33.188 ************************************ 00:09:33.188 12:41:58 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:33.447 Initializing NVMe Controllers 00:09:33.447 Attaching to 0000:00:13.0 00:09:33.447 Controller supports FDP Attached to 0000:00:13.0 00:09:33.447 Namespace ID: 1 Endurance Group ID: 1 00:09:33.447 Initialization complete. 
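[Editor's note] The controller selection just above hinges on a single capability bit: get_ctrls_with_feature walks ctrls[] and keeps a controller only when CTRATT bit 19 (Flexible Data Placement) is set. That is why nvme3 is the one echoed: 0x88010 & (1 << 19) = 0x80000, nonzero, while the 0x8000 controllers have only bit 15 set and are skipped. A condensed sketch of the predicate exercised at functions.sh@176-180, assuming the per-controller associative arrays (nvme0..nvme3) built by nvme_get above:

    # Condensed from the xtrace; not the verbatim functions.sh source.
    ctrl_has_fdp() {
        local ctrl=$1 ctratt
        local -n _ctrl=$ctrl            # nameref, as at functions.sh@73
        ctratt=${_ctrl[ctratt]}         # nvme3 -> 0x88010
        (( ctratt & 1 << 19 ))          # CTRATT bit 19 = FDP supported
    }
    # ctrl_has_fdp nvme3 && echo nvme3  # true; nvme0/1/2 (0x8000) fail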
00:09:33.447 00:09:33.447 ================================== 00:09:33.447 == FDP tests for Namespace: #01 == 00:09:33.447 ================================== 00:09:33.447 00:09:33.447 Get Feature: FDP: 00:09:33.447 ================= 00:09:33.447 Enabled: Yes 00:09:33.447 FDP configuration Index: 0 00:09:33.447 00:09:33.447 FDP configurations log page 00:09:33.447 =========================== 00:09:33.447 Number of FDP configurations: 1 00:09:33.447 Version: 0 00:09:33.447 Size: 112 00:09:33.447 FDP Configuration Descriptor: 0 00:09:33.447 Descriptor Size: 96 00:09:33.447 Reclaim Group Identifier format: 2 00:09:33.447 FDP Volatile Write Cache: Not Present 00:09:33.447 FDP Configuration: Valid 00:09:33.447 Vendor Specific Size: 0 00:09:33.447 Number of Reclaim Groups: 2 00:09:33.447 Number of Reclaim Unit Handles: 8 00:09:33.447 Max Placement Identifiers: 128 00:09:33.447 Number of Namespaces Supported: 256 00:09:33.447 Reclaim unit Nominal Size: 6000000 bytes 00:09:33.447 Estimated Reclaim Unit Time Limit: Not Reported 00:09:33.447 RUH Desc #000: RUH Type: Initially Isolated 00:09:33.447 RUH Desc #001: RUH Type: Initially Isolated 00:09:33.447 RUH Desc #002: RUH Type: Initially Isolated 00:09:33.447 RUH Desc #003: RUH Type: Initially Isolated 00:09:33.447 RUH Desc #004: RUH Type: Initially Isolated 00:09:33.447 RUH Desc #005: RUH Type: Initially Isolated 00:09:33.447 RUH Desc #006: RUH Type: Initially Isolated 00:09:33.447 RUH Desc #007: RUH Type: Initially Isolated 00:09:33.447 00:09:33.447 FDP reclaim unit handle usage log page 00:09:33.447 ====================================== 00:09:33.447 Number of Reclaim Unit Handles: 8 00:09:33.447 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:33.447 RUH Usage Desc #001: RUH Attributes: Unused 00:09:33.447 RUH Usage Desc #002: RUH Attributes: Unused 00:09:33.447 RUH Usage Desc #003: RUH Attributes: Unused 00:09:33.447 RUH Usage Desc #004: RUH Attributes: Unused 00:09:33.447 RUH Usage Desc #005: RUH Attributes: Unused 00:09:33.447 RUH Usage Desc #006: RUH Attributes: Unused 00:09:33.447 RUH Usage Desc #007: RUH Attributes: Unused 00:09:33.447 00:09:33.447 FDP statistics log page 00:09:33.447 ======================= 00:09:33.447 Host bytes with metadata written: 983154688 00:09:33.447 Media bytes with metadata written: 983429120 00:09:33.447 Media bytes erased: 0 00:09:33.447 00:09:33.447 FDP Reclaim unit handle status 00:09:33.447 ============================== 00:09:33.447 Number of RUHS descriptors: 2 00:09:33.447 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000001664 00:09:33.447 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:33.447 00:09:33.447 FDP write on placement id: 0 success 00:09:33.447 00:09:33.447 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:33.447 00:09:33.447 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:33.447 00:09:33.447 Get Feature: FDP Events for Placement handle: #0 00:09:33.447 ======================== 00:09:33.447 Number of FDP Events: 6 00:09:33.447 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:33.447 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:33.447 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:09:33.447 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:33.447 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:33.447 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:33.447 00:09:33.447 FDP events log page
00:09:33.447 =================== 00:09:33.447 Number of FDP events: 1 00:09:33.447 FDP Event #0: 00:09:33.447 Event Type: RU Not Written to Capacity 00:09:33.447 Placement Identifier: Valid 00:09:33.447 NSID: Valid 00:09:33.447 Location: Valid 00:09:33.447 Placement Identifier: 0 00:09:33.447 Event Timestamp: 10 00:09:33.447 Namespace Identifier: 1 00:09:33.447 Reclaim Group Identifier: 0 00:09:33.447 Reclaim Unit Handle Identifier: 0 00:09:33.447 00:09:33.447 FDP test passed 00:09:33.447 00:09:33.447 real 0m0.251s 00:09:33.447 user 0m0.088s 00:09:33.447 sys 0m0.061s 00:09:33.447 12:41:58 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.447 ************************************ 00:09:33.447 END TEST nvme_flexible_data_placement 00:09:33.447 ************************************ 00:09:33.447 12:41:58 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:33.447 ************************************ 00:09:33.447 END TEST nvme_fdp 00:09:33.447 ************************************ 00:09:33.447 00:09:33.447 real 0m7.581s 00:09:33.447 user 0m1.077s 00:09:33.448 sys 0m1.377s 00:09:33.448 12:41:58 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.448 12:41:58 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:33.448 12:41:58 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:33.448 12:41:58 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:33.448 12:41:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.448 12:41:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.448 12:41:58 -- common/autotest_common.sh@10 -- # set +x 00:09:33.448 ************************************ 00:09:33.448 START TEST nvme_rpc 00:09:33.448 ************************************ 00:09:33.448 12:41:58 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:33.707 * Looking for test storage... 
00:09:33.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.707 12:41:59 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:33.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.707 --rc genhtml_branch_coverage=1 00:09:33.707 --rc genhtml_function_coverage=1 00:09:33.707 --rc genhtml_legend=1 00:09:33.707 --rc geninfo_all_blocks=1 00:09:33.707 --rc geninfo_unexecuted_blocks=1 00:09:33.707 00:09:33.707 ' 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:33.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.707 --rc genhtml_branch_coverage=1 00:09:33.707 --rc genhtml_function_coverage=1 00:09:33.707 --rc genhtml_legend=1 00:09:33.707 --rc geninfo_all_blocks=1 00:09:33.707 --rc geninfo_unexecuted_blocks=1 00:09:33.707 00:09:33.707 ' 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:33.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.707 --rc genhtml_branch_coverage=1 00:09:33.707 --rc genhtml_function_coverage=1 00:09:33.707 --rc genhtml_legend=1 00:09:33.707 --rc geninfo_all_blocks=1 00:09:33.707 --rc geninfo_unexecuted_blocks=1 00:09:33.707 00:09:33.707 ' 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:33.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.707 --rc genhtml_branch_coverage=1 00:09:33.707 --rc genhtml_function_coverage=1 00:09:33.707 --rc genhtml_legend=1 00:09:33.707 --rc geninfo_all_blocks=1 00:09:33.707 --rc geninfo_unexecuted_blocks=1 00:09:33.707 00:09:33.707 ' 00:09:33.707 12:41:59 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:33.707 12:41:59 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:33.707 12:41:59 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:33.707 12:41:59 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65895 00:09:33.707 12:41:59 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:33.707 12:41:59 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65895 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65895 ']' 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.707 12:41:59 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.707 12:41:59 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:33.966 [2024-11-20 12:41:59.242559] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
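The nvme_rpc run starting here talks to the freshly launched spdk_tgt over the default UNIX socket /var/tmp/spdk.sock. Stripped of the xtrace noise, the test traced below is three rpc.py calls: attach a controller over PCIe, deliberately fail bdev_nvme_apply_firmware with a missing file to exercise the JSON-RPC error path, and detach. A minimal sketch of the same sequence, assuming a target is already listening on the default socket (the bdf, error code, and message come straight from this log):

  # Attach the controller found at the first discovered PCI address
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
  # Expected to fail with code -32603, "open file failed." -- the file does not exist
  ./scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 || echo 'error path exercised'
  # Tear down
  ./scripts/rpc.py bdev_nvme_detach_controller Nvme0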
00:09:33.966 [2024-11-20 12:41:59.242678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65895 ] 00:09:33.966 [2024-11-20 12:41:59.400152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:34.224 [2024-11-20 12:41:59.501559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.224 [2024-11-20 12:41:59.501811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.794 12:42:00 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.794 12:42:00 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:34.794 12:42:00 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:35.053 Nvme0n1 00:09:35.053 12:42:00 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:35.053 12:42:00 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:35.053 request: 00:09:35.053 { 00:09:35.053 "bdev_name": "Nvme0n1", 00:09:35.053 "filename": "non_existing_file", 00:09:35.053 "method": "bdev_nvme_apply_firmware", 00:09:35.053 "req_id": 1 00:09:35.053 } 00:09:35.053 Got JSON-RPC error response 00:09:35.053 response: 00:09:35.053 { 00:09:35.053 "code": -32603, 00:09:35.053 "message": "open file failed." 00:09:35.053 } 00:09:35.053 12:42:00 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:35.053 12:42:00 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:35.053 12:42:00 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:35.313 12:42:00 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:35.313 12:42:00 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65895 00:09:35.313 12:42:00 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65895 ']' 00:09:35.313 12:42:00 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65895 00:09:35.313 12:42:00 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:35.313 12:42:00 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.313 12:42:00 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65895 00:09:35.313 12:42:00 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.313 killing process with pid 65895 00:09:35.313 12:42:00 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.313 12:42:00 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65895' 00:09:35.313 12:42:00 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65895 00:09:35.313 12:42:00 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65895 00:09:36.698 00:09:36.698 real 0m3.119s 00:09:36.698 user 0m5.938s 00:09:36.698 sys 0m0.500s 00:09:36.698 ************************************ 00:09:36.698 END TEST nvme_rpc 00:09:36.698 ************************************ 00:09:36.698 12:42:02 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.698 12:42:02 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.698 12:42:02 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:36.698 12:42:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:36.698 12:42:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.699 12:42:02 -- common/autotest_common.sh@10 -- # set +x 00:09:36.699 ************************************ 00:09:36.699 START TEST nvme_rpc_timeouts 00:09:36.699 ************************************ 00:09:36.699 12:42:02 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:36.699 * Looking for test storage... 00:09:36.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:36.699 12:42:02 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:36.699 12:42:02 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:36.699 12:42:02 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:36.960 12:42:02 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.960 12:42:02 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:36.960 12:42:02 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.960 12:42:02 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:36.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.960 --rc genhtml_branch_coverage=1 00:09:36.960 --rc genhtml_function_coverage=1 00:09:36.960 --rc genhtml_legend=1 00:09:36.960 --rc geninfo_all_blocks=1 00:09:36.960 --rc geninfo_unexecuted_blocks=1 00:09:36.960 00:09:36.960 ' 00:09:36.960 12:42:02 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:36.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.960 --rc genhtml_branch_coverage=1 00:09:36.960 --rc genhtml_function_coverage=1 00:09:36.960 --rc genhtml_legend=1 00:09:36.960 --rc geninfo_all_blocks=1 00:09:36.960 --rc geninfo_unexecuted_blocks=1 00:09:36.960 00:09:36.960 ' 00:09:36.960 12:42:02 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:36.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.960 --rc genhtml_branch_coverage=1 00:09:36.960 --rc genhtml_function_coverage=1 00:09:36.960 --rc genhtml_legend=1 00:09:36.960 --rc geninfo_all_blocks=1 00:09:36.960 --rc geninfo_unexecuted_blocks=1 00:09:36.960 00:09:36.960 ' 00:09:36.960 12:42:02 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:36.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.960 --rc genhtml_branch_coverage=1 00:09:36.960 --rc genhtml_function_coverage=1 00:09:36.960 --rc genhtml_legend=1 00:09:36.960 --rc geninfo_all_blocks=1 00:09:36.960 --rc geninfo_unexecuted_blocks=1 00:09:36.960 00:09:36.960 ' 00:09:36.960 12:42:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:36.960 12:42:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65955 00:09:36.960 12:42:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65955 00:09:36.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
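The target for nvme_rpc_timeouts is brought up the same way as before: spdk_tgt is started on core mask 0x3 and the script blocks in waitforlisten until the RPC socket answers. A bare-bones equivalent of that launch-and-wait pattern (using rpc_get_methods as the probe is an assumption here; the real waitforlisten in autotest_common.sh is more thorough):

  ./build/bin/spdk_tgt -m 0x3 &
  spdk_tgt_pid=$!
  # Poll until /var/tmp/spdk.sock accepts RPCs
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done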
00:09:36.961 12:42:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65992 00:09:36.961 12:42:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:36.961 12:42:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65992 00:09:36.961 12:42:02 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65992 ']' 00:09:36.961 12:42:02 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.961 12:42:02 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.961 12:42:02 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.961 12:42:02 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.961 12:42:02 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:36.961 12:42:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:36.961 [2024-11-20 12:42:02.349996] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:09:36.961 [2024-11-20 12:42:02.350121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65992 ] 00:09:37.221 [2024-11-20 12:42:02.510472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:37.221 [2024-11-20 12:42:02.614532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.221 [2024-11-20 12:42:02.614613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.789 Checking default timeout settings: 00:09:37.789 12:42:03 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.789 12:42:03 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:37.789 12:42:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:37.789 12:42:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:38.356 Making settings changes with rpc: 00:09:38.356 12:42:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:38.356 12:42:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:38.356 Check default vs. modified settings: 00:09:38.356 12:42:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:38.356 12:42:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:38.614 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:38.614 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:38.614 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65955 00:09:38.614 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:38.614 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:38.872 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:38.872 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:38.872 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65955 00:09:38.872 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:38.872 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:38.872 Setting action_on_timeout is changed as expected. 00:09:38.872 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:38.872 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:38.872 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:38.872 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:38.872 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65955 00:09:38.872 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:38.872 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:38.872 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65955 00:09:38.872 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:38.872 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:38.872 Setting timeout_us is changed as expected. 00:09:38.872 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:38.872 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:38.872 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:38.872 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:38.872 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:38.873 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65955 00:09:38.873 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:38.873 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:38.873 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:38.873 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65955 00:09:38.873 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:38.873 Setting timeout_admin_us is changed as expected. 00:09:38.873 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:38.873 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:38.873 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:38.873 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:38.873 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65955 /tmp/settings_modified_65955 00:09:38.873 12:42:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65992 00:09:38.873 12:42:04 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65992 ']' 00:09:38.873 12:42:04 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65992 00:09:38.873 12:42:04 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:38.873 12:42:04 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.873 12:42:04 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65992 00:09:38.873 killing process with pid 65992 00:09:38.873 12:42:04 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:38.873 12:42:04 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:38.873 12:42:04 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65992' 00:09:38.873 12:42:04 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65992 00:09:38.873 12:42:04 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65992 00:09:40.247 RPC TIMEOUT SETTING TEST PASSED. 00:09:40.247 12:42:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
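The passing check above is a save/modify/save/compare cycle: save_config dumps the default bdev_nvme options, bdev_nvme_set_options rewrites the three timeout knobs, a second dump is taken, and each setting is compared field by field (action_on_timeout none -> abort, timeout_us 0 -> 12000000, timeout_admin_us 0 -> 24000000). Roughly, against a running target, using the same option values as this run:

  ./scripts/rpc.py save_config > /tmp/settings_default
  ./scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 \
      --timeout-admin-us=24000000 --action-on-timeout=abort
  ./scripts/rpc.py save_config > /tmp/settings_modified
  for setting in action_on_timeout timeout_us timeout_admin_us; do
      # diff exits non-zero when the two dumps disagree, i.e. the knob changed
      diff <(grep "$setting" /tmp/settings_default) <(grep "$setting" /tmp/settings_modified) \
          >/dev/null && echo "$setting unchanged!" || echo "$setting changed as expected"
  done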
00:09:40.247 00:09:40.248 real 0m3.566s 00:09:40.248 user 0m6.842s 00:09:40.248 sys 0m0.565s 00:09:40.248 ************************************ 00:09:40.248 END TEST nvme_rpc_timeouts 00:09:40.248 ************************************ 00:09:40.248 12:42:05 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.248 12:42:05 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:40.248 12:42:05 -- spdk/autotest.sh@239 -- # uname -s 00:09:40.248 12:42:05 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:40.248 12:42:05 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:40.248 12:42:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:40.248 12:42:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.248 12:42:05 -- common/autotest_common.sh@10 -- # set +x 00:09:40.506 ************************************ 00:09:40.506 START TEST sw_hotplug 00:09:40.506 ************************************ 00:09:40.506 12:42:05 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:40.506 * Looking for test storage... 00:09:40.506 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:40.506 12:42:05 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:40.506 12:42:05 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:40.506 12:42:05 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:40.506 12:42:05 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:40.506 12:42:05 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.506 12:42:05 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.506 12:42:05 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.506 12:42:05 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.506 12:42:05 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.506 12:42:05 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.506 12:42:05 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.506 12:42:05 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.506 12:42:05 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.506 12:42:05 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.506 12:42:05 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.506 12:42:05 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:40.506 12:42:05 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:40.506 12:42:05 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.506 12:42:05 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:40.506 12:42:05 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:40.506 12:42:05 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:40.506 12:42:05 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.506 12:42:05 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:40.506 12:42:05 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.506 12:42:05 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:40.506 12:42:05 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:40.506 12:42:05 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.507 12:42:05 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:40.507 12:42:05 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.507 12:42:05 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.507 12:42:05 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.507 12:42:05 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:40.507 12:42:05 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.507 12:42:05 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:40.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.507 --rc genhtml_branch_coverage=1 00:09:40.507 --rc genhtml_function_coverage=1 00:09:40.507 --rc genhtml_legend=1 00:09:40.507 --rc geninfo_all_blocks=1 00:09:40.507 --rc geninfo_unexecuted_blocks=1 00:09:40.507 00:09:40.507 ' 00:09:40.507 12:42:05 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:40.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.507 --rc genhtml_branch_coverage=1 00:09:40.507 --rc genhtml_function_coverage=1 00:09:40.507 --rc genhtml_legend=1 00:09:40.507 --rc geninfo_all_blocks=1 00:09:40.507 --rc geninfo_unexecuted_blocks=1 00:09:40.507 00:09:40.507 ' 00:09:40.507 12:42:05 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:40.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.507 --rc genhtml_branch_coverage=1 00:09:40.507 --rc genhtml_function_coverage=1 00:09:40.507 --rc genhtml_legend=1 00:09:40.507 --rc geninfo_all_blocks=1 00:09:40.507 --rc geninfo_unexecuted_blocks=1 00:09:40.507 00:09:40.507 ' 00:09:40.507 12:42:05 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:40.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.507 --rc genhtml_branch_coverage=1 00:09:40.507 --rc genhtml_function_coverage=1 00:09:40.507 --rc genhtml_legend=1 00:09:40.507 --rc geninfo_all_blocks=1 00:09:40.507 --rc geninfo_unexecuted_blocks=1 00:09:40.507 00:09:40.507 ' 00:09:40.507 12:42:05 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:40.765 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:41.023 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:41.023 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:41.023 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:41.023 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:41.023 12:42:06 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:41.023 12:42:06 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:41.023 12:42:06 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
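nvme_in_userspace, traced next, finds controllers by PCI class code rather than by kernel device node: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe) yield the "0108" and "-p02" filters fed to lspci. The heart of the enumeration is the same pipeline the trace shows:

  # PCI addresses of every NVMe controller (class 0108, prog-if 02)
  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'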
00:09:41.023 12:42:06 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:41.023 12:42:06 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:41.023 12:42:06 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:41.024 12:42:06 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:41.024 12:42:06 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:41.024 12:42:06 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:41.024 12:42:06 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:41.282 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:41.541 Waiting for block devices as requested 00:09:41.541 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:41.541 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:41.800 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:41.800 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:47.063 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:47.063 12:42:12 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:47.063 12:42:12 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:47.321 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:47.321 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:47.321 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:47.578 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:47.835 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:47.835 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:47.835 12:42:13 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:47.835 12:42:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:47.835 12:42:13 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:47.835 12:42:13 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:47.835 12:42:13 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66851 00:09:47.835 12:42:13 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:47.835 12:42:13 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:47.835 12:42:13 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:47.835 12:42:13 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:47.835 12:42:13 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:47.835 12:42:13 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:47.835 12:42:13 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:47.835 12:42:13 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:47.835 12:42:13 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:47.835 12:42:13 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:47.835 12:42:13 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:47.835 12:42:13 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:47.835 12:42:13 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:47.835 12:42:13 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:48.093 Initializing NVMe Controllers 00:09:48.093 Attaching to 0000:00:10.0 00:09:48.093 Attaching to 0000:00:11.0 00:09:48.093 Attached to 0000:00:10.0 00:09:48.093 Attached to 0000:00:11.0 00:09:48.093 Initialization complete. Starting I/O... 
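This first phase drives build/examples/hotplug directly, with PCI_ALLOWED narrowing the run to 0000:00:10.0 and 0000:00:11.0. Each hotplug event in sw_hotplug.sh is plain sysfs surgery; the xtrace below only records the bare echos, so the redirect targets in this sketch are reconstructed and should be read as assumptions:

  # Surprise removal: drop the function from the PCI bus
  echo 1 > /sys/bus/pci/devices/0000:00:10.0/remove
  # Re-plug: rescan, pin the driver, probe, then clear the override
  echo 1 > /sys/bus/pci/rescan
  echo uio_pci_generic > /sys/bus/pci/devices/0000:00:10.0/driver_override
  echo 0000:00:10.0 > /sys/bus/pci/drivers_probe
  echo '' > /sys/bus/pci/devices/0000:00:10.0/driver_override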
00:09:48.093 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:48.093 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:48.093 00:09:49.026 QEMU NVMe Ctrl (12340 ): 2486 I/Os completed (+2486) 00:09:49.026 QEMU NVMe Ctrl (12341 ): 2501 I/Os completed (+2501) 00:09:49.026 00:09:50.399 QEMU NVMe Ctrl (12340 ): 5562 I/Os completed (+3076) 00:09:50.400 QEMU NVMe Ctrl (12341 ): 5569 I/Os completed (+3068) 00:09:50.400 00:09:51.333 QEMU NVMe Ctrl (12340 ): 8665 I/Os completed (+3103) 00:09:51.333 QEMU NVMe Ctrl (12341 ): 8648 I/Os completed (+3079) 00:09:51.333 00:09:52.267 QEMU NVMe Ctrl (12340 ): 12181 I/Os completed (+3516) 00:09:52.267 QEMU NVMe Ctrl (12341 ): 12132 I/Os completed (+3484) 00:09:52.267 00:09:53.199 QEMU NVMe Ctrl (12340 ): 15846 I/Os completed (+3665) 00:09:53.199 QEMU NVMe Ctrl (12341 ): 15815 I/Os completed (+3683) 00:09:53.199 00:09:54.132 12:42:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:54.132 12:42:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:54.132 12:42:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:54.132 [2024-11-20 12:42:19.323031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:54.132 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:54.132 [2024-11-20 12:42:19.323970] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.132 [2024-11-20 12:42:19.324008] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.132 [2024-11-20 12:42:19.324021] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.132 [2024-11-20 12:42:19.324037] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.132 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:54.132 [2024-11-20 12:42:19.325614] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.132 [2024-11-20 12:42:19.325657] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.132 [2024-11-20 12:42:19.325669] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.132 [2024-11-20 12:42:19.325681] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.132 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:09:54.132 EAL: Scan for (pci) bus failed. 00:09:54.132 12:42:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:54.132 12:42:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:54.132 [2024-11-20 12:42:19.345690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:54.132 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:54.132 [2024-11-20 12:42:19.346629] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.132 [2024-11-20 12:42:19.346665] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.132 [2024-11-20 12:42:19.346684] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.132 [2024-11-20 12:42:19.346697] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.132 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:54.132 [2024-11-20 12:42:19.348048] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.132 [2024-11-20 12:42:19.348078] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.133 [2024-11-20 12:42:19.348091] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.133 [2024-11-20 12:42:19.348103] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.133 12:42:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:54.133 12:42:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:54.133 12:42:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:54.133 12:42:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:54.133 12:42:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:54.133 12:42:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:54.133 12:42:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:54.133 12:42:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:54.133 12:42:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:54.133 12:42:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:54.133 Attaching to 0000:00:10.0 00:09:54.133 Attached to 0000:00:10.0 00:09:54.133 QEMU NVMe Ctrl (12340 ): 4 I/Os completed (+4) 00:09:54.133 00:09:54.133 12:42:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:54.133 12:42:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:54.133 12:42:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:54.133 Attaching to 0000:00:11.0 00:09:54.133 Attached to 0000:00:11.0 00:09:55.064 QEMU NVMe Ctrl (12340 ): 3417 I/Os completed (+3413) 00:09:55.064 QEMU NVMe Ctrl (12341 ): 3026 I/Os completed (+3026) 00:09:55.064 00:09:56.024 QEMU NVMe Ctrl (12340 ): 6553 I/Os completed (+3136) 00:09:56.024 QEMU NVMe Ctrl (12341 ): 6192 I/Os completed (+3166) 00:09:56.024 00:09:57.396 QEMU NVMe Ctrl (12340 ): 10161 I/Os completed (+3608) 00:09:57.396 QEMU NVMe Ctrl (12341 ): 9792 I/Os completed (+3600) 00:09:57.396 00:09:58.329 QEMU NVMe Ctrl (12340 ): 13763 I/Os completed (+3602) 00:09:58.329 QEMU NVMe Ctrl (12341 ): 13397 I/Os completed (+3605) 00:09:58.329 00:09:59.262 QEMU NVMe Ctrl (12340 ): 17381 I/Os completed (+3618) 00:09:59.262 QEMU NVMe Ctrl (12341 ): 17024 I/Os completed (+3627) 00:09:59.262 00:10:00.197 QEMU NVMe Ctrl (12340 ): 20965 I/Os completed (+3584) 00:10:00.197 QEMU NVMe Ctrl (12341 ): 20615 I/Os completed (+3591) 00:10:00.197 00:10:01.131 QEMU NVMe Ctrl (12340 ): 24533 I/Os completed (+3568) 00:10:01.131 QEMU NVMe Ctrl (12341 ): 24207 I/Os completed (+3592) 00:10:01.131 00:10:02.063 QEMU NVMe Ctrl (12340 ): 28144 I/Os completed (+3611) 00:10:02.063 QEMU 
NVMe Ctrl (12341 ): 27815 I/Os completed (+3608) 00:10:02.063 00:10:03.031 QEMU NVMe Ctrl (12340 ): 31295 I/Os completed (+3151) 00:10:03.031 QEMU NVMe Ctrl (12341 ): 30922 I/Os completed (+3107) 00:10:03.031 00:10:04.403 QEMU NVMe Ctrl (12340 ): 34433 I/Os completed (+3138) 00:10:04.403 QEMU NVMe Ctrl (12341 ): 34049 I/Os completed (+3127) 00:10:04.403 00:10:05.336 QEMU NVMe Ctrl (12340 ): 37931 I/Os completed (+3498) 00:10:05.336 QEMU NVMe Ctrl (12341 ): 37606 I/Os completed (+3557) 00:10:05.336 00:10:06.272 QEMU NVMe Ctrl (12340 ): 40929 I/Os completed (+2998) 00:10:06.272 QEMU NVMe Ctrl (12341 ): 40623 I/Os completed (+3017) 00:10:06.272 00:10:06.272 12:42:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:06.272 12:42:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:06.272 12:42:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:06.272 12:42:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:06.272 [2024-11-20 12:42:31.607131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:06.272 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:06.272 [2024-11-20 12:42:31.608388] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.272 [2024-11-20 12:42:31.608519] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.272 [2024-11-20 12:42:31.608556] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.272 [2024-11-20 12:42:31.608629] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.272 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:06.272 [2024-11-20 12:42:31.610628] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.272 [2024-11-20 12:42:31.610698] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.272 [2024-11-20 12:42:31.610726] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.272 [2024-11-20 12:42:31.610768] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.272 12:42:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:06.272 12:42:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:06.272 [2024-11-20 12:42:31.630783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:06.272 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:06.272 [2024-11-20 12:42:31.632002] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.272 [2024-11-20 12:42:31.632061] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.272 [2024-11-20 12:42:31.632096] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.272 [2024-11-20 12:42:31.632211] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.272 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:06.272 [2024-11-20 12:42:31.635523] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.272 [2024-11-20 12:42:31.635620] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.272 [2024-11-20 12:42:31.635688] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.272 [2024-11-20 12:42:31.635719] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.272 12:42:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:06.272 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:06.272 EAL: Scan for (pci) bus failed. 00:10:06.272 12:42:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:06.272 12:42:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:06.272 12:42:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:06.272 12:42:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:06.272 12:42:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:06.272 12:42:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:06.272 12:42:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:06.272 12:42:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:06.272 12:42:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:06.272 Attaching to 0000:00:10.0 00:10:06.272 Attached to 0000:00:10.0 00:10:06.531 12:42:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:06.531 12:42:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:06.531 12:42:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:06.531 Attaching to 0000:00:11.0 00:10:06.531 Attached to 0000:00:11.0 00:10:07.097 QEMU NVMe Ctrl (12340 ): 2637 I/Os completed (+2637) 00:10:07.097 QEMU NVMe Ctrl (12341 ): 2356 I/Os completed (+2356) 00:10:07.097 00:10:08.031 QEMU NVMe Ctrl (12340 ): 6199 I/Os completed (+3562) 00:10:08.031 QEMU NVMe Ctrl (12341 ): 5997 I/Os completed (+3641) 00:10:08.031 00:10:09.405 QEMU NVMe Ctrl (12340 ): 9698 I/Os completed (+3499) 00:10:09.405 QEMU NVMe Ctrl (12341 ): 9238 I/Os completed (+3241) 00:10:09.405 00:10:10.340 QEMU NVMe Ctrl (12340 ): 13102 I/Os completed (+3404) 00:10:10.340 QEMU NVMe Ctrl (12341 ): 12646 I/Os completed (+3408) 00:10:10.340 00:10:11.275 QEMU NVMe Ctrl (12340 ): 16169 I/Os completed (+3067) 00:10:11.275 QEMU NVMe Ctrl (12341 ): 15732 I/Os completed (+3086) 00:10:11.275 00:10:12.211 QEMU NVMe Ctrl (12340 ): 19413 I/Os completed (+3244) 00:10:12.211 QEMU NVMe Ctrl (12341 ): 19016 I/Os completed (+3284) 00:10:12.211 00:10:13.147 QEMU NVMe Ctrl (12340 ): 22527 I/Os completed (+3114) 00:10:13.147 QEMU NVMe Ctrl (12341 ): 22179 I/Os completed (+3163) 00:10:13.147 
00:10:14.081 QEMU NVMe Ctrl (12340 ): 25614 I/Os completed (+3087) 00:10:14.081 QEMU NVMe Ctrl (12341 ): 25246 I/Os completed (+3067) 00:10:14.081 00:10:15.014 QEMU NVMe Ctrl (12340 ): 28690 I/Os completed (+3076) 00:10:15.014 QEMU NVMe Ctrl (12341 ): 28295 I/Os completed (+3049) 00:10:15.014 00:10:16.388 QEMU NVMe Ctrl (12340 ): 32268 I/Os completed (+3578) 00:10:16.388 QEMU NVMe Ctrl (12341 ): 31882 I/Os completed (+3587) 00:10:16.388 00:10:17.322 QEMU NVMe Ctrl (12340 ): 35516 I/Os completed (+3248) 00:10:17.322 QEMU NVMe Ctrl (12341 ): 35109 I/Os completed (+3227) 00:10:17.322 00:10:18.256 QEMU NVMe Ctrl (12340 ): 38804 I/Os completed (+3288) 00:10:18.256 QEMU NVMe Ctrl (12341 ): 38445 I/Os completed (+3336) 00:10:18.256 00:10:18.514 12:42:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:18.514 12:42:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:18.514 12:42:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:18.514 12:42:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:18.514 [2024-11-20 12:42:43.871187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:18.514 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:18.514 [2024-11-20 12:42:43.872337] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.514 [2024-11-20 12:42:43.872382] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.514 [2024-11-20 12:42:43.872399] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.514 [2024-11-20 12:42:43.872417] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.514 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:18.514 [2024-11-20 12:42:43.874265] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.514 [2024-11-20 12:42:43.874309] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.514 [2024-11-20 12:42:43.874323] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.514 [2024-11-20 12:42:43.874338] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.514 12:42:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:18.514 12:42:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:18.514 [2024-11-20 12:42:43.891659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:18.514 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:18.514 [2024-11-20 12:42:43.892946] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.514 [2024-11-20 12:42:43.893018] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.514 [2024-11-20 12:42:43.893053] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.514 [2024-11-20 12:42:43.893072] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.514 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:18.514 [2024-11-20 12:42:43.894705] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.514 [2024-11-20 12:42:43.894751] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.514 [2024-11-20 12:42:43.894770] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.514 [2024-11-20 12:42:43.894782] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.514 12:42:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:18.514 12:42:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:18.514 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:18.514 EAL: Scan for (pci) bus failed. 00:10:18.514 12:42:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:18.514 12:42:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:18.514 12:42:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:18.514 12:42:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:18.772 12:42:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:18.772 12:42:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:18.772 12:42:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:18.772 12:42:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:18.772 Attaching to 0000:00:10.0 00:10:18.772 Attached to 0000:00:10.0 00:10:18.772 12:42:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:18.772 12:42:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:18.773 12:42:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:18.773 Attaching to 0000:00:11.0 00:10:18.773 Attached to 0000:00:11.0 00:10:18.773 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:18.773 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:18.773 [2024-11-20 12:42:44.136181] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:31.001 12:42:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:31.001 12:42:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:31.001 12:42:56 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.81 00:10:31.001 12:42:56 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.81 00:10:31.001 12:42:56 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:31.001 12:42:56 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.81 00:10:31.001 12:42:56 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.81 2 00:10:31.001 remove_attach_helper took 42.81s to complete (handling 2 nvme drive(s)) 12:42:56 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:37.575 12:43:02 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66851 00:10:37.575 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66851) - No such process 00:10:37.575 12:43:02 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66851 00:10:37.575 12:43:02 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:37.575 12:43:02 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:37.575 12:43:02 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:37.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.575 12:43:02 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67402 00:10:37.576 12:43:02 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:37.576 12:43:02 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67402 00:10:37.576 12:43:02 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67402 ']' 00:10:37.576 12:43:02 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.576 12:43:02 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.576 12:43:02 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:37.576 12:43:02 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.576 12:43:02 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.576 12:43:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:37.576 [2024-11-20 12:43:02.219118] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
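The second phase, tgt_run_hotplug, repeats the exercise through a long-lived spdk_tgt: hotplug monitoring is switched on over JSON-RPC and the surviving controllers are read back from bdev_get_bdevs, both visible in the trace that follows. A minimal sketch against the default socket:

  # Enable bdev_nvme hotplug monitoring
  ./scripts/rpc.py bdev_nvme_set_hotplug -e
  # PCI addresses currently backing nvme bdevs
  ./scripts/rpc.py bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u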
00:10:37.576 [2024-11-20 12:43:02.219238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67402 ] 00:10:37.576 [2024-11-20 12:43:02.380547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.576 [2024-11-20 12:43:02.475944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.576 12:43:03 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.576 12:43:03 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:37.576 12:43:03 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:37.576 12:43:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.576 12:43:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:37.576 12:43:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.576 12:43:03 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:37.576 12:43:03 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:37.576 12:43:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:37.576 12:43:03 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:37.576 12:43:03 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:37.576 12:43:03 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:37.576 12:43:03 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:37.576 12:43:03 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:37.576 12:43:03 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:37.576 12:43:03 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:37.576 12:43:03 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:37.576 12:43:03 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:37.576 12:43:03 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:44.135 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:44.135 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:44.135 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:44.135 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:44.135 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:44.135 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:44.135 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:44.135 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:44.135 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:44.135 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:44.135 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:44.135 12:43:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.135 12:43:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:44.135 12:43:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.135 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:44.135 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:44.135 [2024-11-20 12:43:09.165703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:44.135 [2024-11-20 12:43:09.166973] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.135 [2024-11-20 12:43:09.167100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.135 [2024-11-20 12:43:09.167119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.135 [2024-11-20 12:43:09.167138] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.135 [2024-11-20 12:43:09.167145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.135 [2024-11-20 12:43:09.167154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.135 [2024-11-20 12:43:09.167161] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.135 [2024-11-20 12:43:09.167169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.135 [2024-11-20 12:43:09.167175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.135 [2024-11-20 12:43:09.167186] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.135 [2024-11-20 12:43:09.167193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.135 [2024-11-20 12:43:09.167200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.135 [2024-11-20 12:43:09.565710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
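
The "remove_attach_helper took 42.81s" line earlier is produced by the timing wrapper whose internals appear at the @709-@722 markers (local cmd_es, [[ -t 0 ]], exec, TIMEFORMAT=%2R). A minimal sketch of that pattern, assuming the capture mechanics below; the timed command's own output is discarded here for brevity, which the real wrapper avoids:

timing_cmd() {
    local cmd_es=0                    # @709: preserve the command's exit status
    [[ -t 0 ]] && exec < /dev/null    # @711: detach stdin from a terminal (guard/redirect assumed)
    local time=0 TIMEFORMAT=%2R       # @713: `time` prints elapsed real seconds, 2 decimals
    # `time` reports on stderr; capturing 2>&1 inside $() grabs just that figure.
    time=$( { time "$@" > /dev/null 2>&1; } 2>&1 ) || cmd_es=$?
    echo "$time"                      # @720: hand the measurement back to the caller
    return "$cmd_es"                  # @722
}

helper_time=$(timing_cmd remove_attach_helper 3 6 true)                  # @21
printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
    "$helper_time" 2                                                     # @22
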
00:10:44.135 [2024-11-20 12:43:09.567093] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.135 [2024-11-20 12:43:09.567126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.135 [2024-11-20 12:43:09.567138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.135 [2024-11-20 12:43:09.567154] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.135 [2024-11-20 12:43:09.567162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.135 [2024-11-20 12:43:09.567169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.135 [2024-11-20 12:43:09.567179] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.135 [2024-11-20 12:43:09.567186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.135 [2024-11-20 12:43:09.567193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.135 [2024-11-20 12:43:09.567200] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.135 [2024-11-20 12:43:09.567208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.135 [2024-11-20 12:43:09.567215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.394 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:44.394 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:44.394 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:44.394 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:44.394 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:44.394 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:44.394 12:43:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.394 12:43:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:44.394 12:43:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.394 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:44.394 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:44.394 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:44.394 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:44.394 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:44.394 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:44.394 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:44.394 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:44.394 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:44.394 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:44.394 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:44.653 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:44.653 12:43:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:56.862 12:43:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:56.862 12:43:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:56.862 12:43:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:56.862 12:43:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:56.862 12:43:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:56.862 12:43:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:56.862 12:43:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.862 12:43:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:56.862 12:43:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.862 12:43:21 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:56.862 12:43:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:56.862 12:43:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:56.862 12:43:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:56.862 [2024-11-20 12:43:21.965895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:56.862 [2024-11-20 12:43:21.967370] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.862 [2024-11-20 12:43:21.967402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:56.862 [2024-11-20 12:43:21.967412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:56.862 [2024-11-20 12:43:21.967429] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.862 [2024-11-20 12:43:21.967437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:56.862 [2024-11-20 12:43:21.967445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:56.862 [2024-11-20 12:43:21.967452] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.862 [2024-11-20 12:43:21.967460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:56.862 [2024-11-20 12:43:21.967466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:56.862 [2024-11-20 12:43:21.967475] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.862 [2024-11-20 12:43:21.967481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:56.862 [2024-11-20 12:43:21.967489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:56.862 12:43:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:56.862 12:43:21 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:56.862 12:43:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:56.862 12:43:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:56.862 12:43:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:56.862 12:43:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:56.862 12:43:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:56.862 12:43:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:56.863 12:43:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.863 12:43:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:56.863 12:43:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.863 12:43:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:56.863 12:43:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:56.863 [2024-11-20 12:43:22.365896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:56.863 [2024-11-20 12:43:22.367115] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.863 [2024-11-20 12:43:22.367150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:56.863 [2024-11-20 12:43:22.367163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:56.863 [2024-11-20 12:43:22.367178] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.863 [2024-11-20 12:43:22.367188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:56.863 [2024-11-20 12:43:22.367195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:56.863 [2024-11-20 12:43:22.367203] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.863 [2024-11-20 12:43:22.367210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:56.863 [2024-11-20 12:43:22.367218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:56.863 [2024-11-20 12:43:22.367225] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.863 [2024-11-20 12:43:22.367232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:56.863 [2024-11-20 12:43:22.367239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.121 12:43:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:57.121 12:43:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:57.121 12:43:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:57.121 12:43:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:57.121 12:43:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:57.121 12:43:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:10:57.121 12:43:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.121 12:43:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:57.121 12:43:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.121 12:43:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:57.121 12:43:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:57.379 12:43:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:57.379 12:43:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:57.379 12:43:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:57.379 12:43:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:57.379 12:43:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:57.379 12:43:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:57.379 12:43:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:57.379 12:43:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:57.379 12:43:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:57.379 12:43:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:57.379 12:43:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:09.574 12:43:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:09.574 12:43:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:09.574 12:43:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:09.574 12:43:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:09.574 12:43:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:09.574 12:43:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:09.574 12:43:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.574 12:43:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:09.574 12:43:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.574 12:43:34 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:09.574 12:43:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:09.574 12:43:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:09.574 12:43:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:09.574 12:43:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:09.574 12:43:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:09.574 12:43:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:09.574 12:43:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:09.574 12:43:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:09.574 [2024-11-20 12:43:34.866123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
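
The echo sequence above (@56, @58-@62) is the re-plug half of each hotplug round. Bash xtrace never prints redirection targets, so the sysfs paths below are assumptions based on the standard Linux PCI interface; only the echoed values come from the trace:

echo 1 > /sys/bus/pci/rescan                                            # @56: re-enumerate removed devices
for dev in "${nvmes[@]}"; do                                            # @58
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59: pin the userspace driver
    echo "$dev" > /sys/bus/pci/drivers_probe                            # @60: target assumed
    echo "$dev" > /sys/bus/pci/drivers/uio_pci_generic/bind || :        # @61: target assumed; may already be bound
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62: clear the override again
done
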
00:11:09.574 12:43:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:09.574 [2024-11-20 12:43:34.867480] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.574 [2024-11-20 12:43:34.867515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:09.574 [2024-11-20 12:43:34.867527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:09.574 [2024-11-20 12:43:34.867543] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.574 [2024-11-20 12:43:34.867551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:09.574 [2024-11-20 12:43:34.867561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:09.574 [2024-11-20 12:43:34.867568] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.574 [2024-11-20 12:43:34.867576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:09.574 [2024-11-20 12:43:34.867582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:09.574 [2024-11-20 12:43:34.867590] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.574 [2024-11-20 12:43:34.867596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:09.574 [2024-11-20 12:43:34.867604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:09.574 12:43:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:09.574 12:43:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:09.574 12:43:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.574 12:43:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:09.574 12:43:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.574 12:43:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:09.574 12:43:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:09.832 [2024-11-20 12:43:35.266129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
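
The @12/@13/@50/@51 markers repeating through the trace are a two-piece polling construct: a helper listing the PCI addresses still backing NVMe bdevs, and a loop that waits for that list to drain after the surprise removal. The jq filter, sort -u, and printf format are verbatim from the trace (/dev/fd/63 is the process substitution); the loop framing is inferred:

bdev_bdfs() {
    jq -r '.[].driver_specific.nvme[].pci_address' \
        <(rpc_cmd bdev_get_bdevs) | sort -u                  # @12-@13
}

while bdfs=($(bdev_bdfs)) && (( ${#bdfs[@]} > 0 )); do       # @50: the 2 > 0, 1 > 0, 0 > 0 checks above
    sleep 0.5                                                # @50
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"  # @51: one line per remaining bdf
done
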
00:11:09.832 [2024-11-20 12:43:35.267445] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.833 [2024-11-20 12:43:35.267479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:09.833 [2024-11-20 12:43:35.267491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:09.833 [2024-11-20 12:43:35.267505] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.833 [2024-11-20 12:43:35.267514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:09.833 [2024-11-20 12:43:35.267521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:09.833 [2024-11-20 12:43:35.267529] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.833 [2024-11-20 12:43:35.267535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:09.833 [2024-11-20 12:43:35.267545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:09.833 [2024-11-20 12:43:35.267552] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.833 [2024-11-20 12:43:35.267560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:09.833 [2024-11-20 12:43:35.267566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.090 12:43:35 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:10.090 12:43:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.090 12:43:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.090 12:43:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.090 12:43:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.090 12:43:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.090 12:43:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.090 12:43:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.090 12:43:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.090 12:43:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:10.090 12:43:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:10.090 12:43:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:10.090 12:43:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:10.090 12:43:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:10.091 12:43:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:10.091 12:43:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:10.091 12:43:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:10.091 12:43:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:10.091 12:43:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
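
Putting those pieces together, the @27-@66 markers outline the helper that drives each hotplug round. A skeleton consistent with the trace; here hotplug_events=3, hotplug_wait=6, and the `sleep 12` at @66 is twice hotplug_wait. Anything not visible in the trace is marked as an assumption:

remove_attach_helper() {
    local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3     # @27-@29
    local dev bdfs                                          # @30
    sleep "$hotplug_wait"                                   # @36: let I/O get going first
    while (( hotplug_events-- )); do                        # @38
        for dev in "${nvmes[@]}"; do                        # @39
            echo 1 > "/sys/bus/pci/devices/$dev/remove"     # @40: sysfs target assumed
        done
        # @43-@51: wait for the bdevs to vanish, then rescan and rebind
        # (@56-@62, sketched above), then let the re-attach settle:
        sleep $((hotplug_wait * 2))                         # @66: the `sleep 12` above
    done
}
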
00:11:10.348 12:43:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:10.348 12:43:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:10.348 12:43:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:22.561 12:43:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:22.561 12:43:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:22.561 12:43:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:22.561 12:43:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:22.561 12:43:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:22.561 12:43:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:22.561 12:43:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.561 12:43:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:22.561 12:43:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.561 12:43:47 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:22.561 12:43:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:22.561 12:43:47 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.63 00:11:22.561 12:43:47 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.63 00:11:22.561 12:43:47 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:22.561 12:43:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.63 00:11:22.561 12:43:47 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.63 2 00:11:22.561 remove_attach_helper took 44.63s to complete (handling 2 nvme drive(s)) 12:43:47 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:22.561 12:43:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.561 12:43:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:22.561 12:43:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.561 12:43:47 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:22.561 12:43:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.561 12:43:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:22.561 12:43:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.561 12:43:47 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:22.561 12:43:47 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:22.561 12:43:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:22.561 12:43:47 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:22.562 12:43:47 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:22.562 12:43:47 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:22.562 12:43:47 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:22.562 12:43:47 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:22.562 12:43:47 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:22.562 12:43:47 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:22.562 12:43:47 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:22.562 12:43:47 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:22.562 12:43:47 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:29.159 12:43:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:29.159 12:43:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:29.159 12:43:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:29.159 12:43:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:29.159 12:43:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:29.159 12:43:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:29.159 12:43:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:29.159 12:43:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:29.159 12:43:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:29.159 12:43:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:29.159 12:43:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:29.159 12:43:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.159 12:43:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.159 12:43:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.159 12:43:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:29.159 12:43:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:29.159 [2024-11-20 12:43:53.820199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:29.159 [2024-11-20 12:43:53.821478] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.159 [2024-11-20 12:43:53.821509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.159 [2024-11-20 12:43:53.821519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.159 [2024-11-20 12:43:53.821537] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.159 [2024-11-20 12:43:53.821544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.159 [2024-11-20 12:43:53.821553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.159 [2024-11-20 12:43:53.821561] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.159 [2024-11-20 12:43:53.821569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.159 [2024-11-20 12:43:53.821575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.159 [2024-11-20 12:43:53.821584] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.159 [2024-11-20 12:43:53.821590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.159 [2024-11-20 12:43:53.821600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.159 12:43:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:29.159 12:43:54 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:29.159 12:43:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:29.159 12:43:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:29.159 12:43:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:29.159 12:43:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:29.159 12:43:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.159 12:43:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.159 [2024-11-20 12:43:54.320197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:29.159 [2024-11-20 12:43:54.321153] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.159 [2024-11-20 12:43:54.321184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.159 [2024-11-20 12:43:54.321195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.159 [2024-11-20 12:43:54.321209] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.159 [2024-11-20 12:43:54.321217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.159 [2024-11-20 12:43:54.321224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.159 [2024-11-20 12:43:54.321233] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.159 [2024-11-20 12:43:54.321239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.159 [2024-11-20 12:43:54.321247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.159 [2024-11-20 12:43:54.321254] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.159 [2024-11-20 12:43:54.321261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.159 [2024-11-20 12:43:54.321268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.159 12:43:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.159 12:43:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:29.159 12:43:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:29.424 12:43:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:29.424 12:43:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:29.424 12:43:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:29.424 12:43:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:29.424 12:43:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:29.424 12:43:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:29.424 12:43:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.424 12:43:54 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:11:29.424 12:43:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.424 12:43:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:29.424 12:43:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:29.683 12:43:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:29.683 12:43:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:29.683 12:43:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:29.683 12:43:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:29.683 12:43:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:29.683 12:43:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:29.683 12:43:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:29.683 12:43:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:29.683 12:43:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:29.683 12:43:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:29.683 12:43:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:41.931 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:41.931 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:41.931 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:41.931 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:41.931 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:41.931 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:41.931 12:44:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.931 12:44:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:41.931 12:44:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.931 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:41.931 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:41.931 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:41.931 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:41.931 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:41.931 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:41.931 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:41.931 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:41.931 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:41.931 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:41.931 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:41.931 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:41.931 12:44:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.932 12:44:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:41.932 12:44:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.932 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:41.932 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:41.932 [2024-11-20 12:44:07.220479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
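
The backslash-riddled `[[ ... == \0\0\0\0... ]]` line at @71 looks garbled but is ordinary xtrace output: bash escapes every character of the right-hand operand of == to show it is compared literally, not as a glob pattern. De-escaped, the check is simply:

bdfs=($(bdev_bdfs))                                  # @70
[[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]      # @71: both controllers are back
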
00:11:41.932 [2024-11-20 12:44:07.221485] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.932 [2024-11-20 12:44:07.221519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.932 [2024-11-20 12:44:07.221531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.932 [2024-11-20 12:44:07.221549] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.932 [2024-11-20 12:44:07.221556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.932 [2024-11-20 12:44:07.221565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.932 [2024-11-20 12:44:07.221572] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.932 [2024-11-20 12:44:07.221580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.932 [2024-11-20 12:44:07.221587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.932 [2024-11-20 12:44:07.221595] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.932 [2024-11-20 12:44:07.221601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.932 [2024-11-20 12:44:07.221609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.222 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:42.222 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:42.222 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:42.222 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:42.222 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:42.222 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:42.222 12:44:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.222 12:44:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.222 12:44:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.222 [2024-11-20 12:44:07.720487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
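
This second half of the test (tgt_run_hotplug) exercises removal while SPDK's own hotplug monitor is active: the @115/@119/@120 calls earlier toggle it with `rpc_cmd bdev_nvme_set_hotplug -e` / `-d`. Against a running target the same RPC can be issued directly with scripts/rpc.py; the -e/-d flags are the ones in the trace (the JSON-RPC also takes an optional period_us poll interval, unused here):

scripts/rpc.py bdev_nvme_set_hotplug -e   # enable the periodic PCI scan in bdev_nvme
# ... surprise-remove and re-attach the devices ...
scripts/rpc.py bdev_nvme_set_hotplug -d   # disable it again
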
00:11:42.222 [2024-11-20 12:44:07.721391] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.222 [2024-11-20 12:44:07.721421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.222 [2024-11-20 12:44:07.721432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.222 [2024-11-20 12:44:07.721446] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.222 [2024-11-20 12:44:07.721456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.222 [2024-11-20 12:44:07.721463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.222 [2024-11-20 12:44:07.721472] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.222 [2024-11-20 12:44:07.721479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.223 [2024-11-20 12:44:07.721487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.223 [2024-11-20 12:44:07.721494] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.223 [2024-11-20 12:44:07.721502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.223 [2024-11-20 12:44:07.721509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.223 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:42.223 12:44:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:42.796 12:44:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:42.796 12:44:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:42.796 12:44:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:42.796 12:44:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:42.796 12:44:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:42.796 12:44:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:42.796 12:44:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.796 12:44:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.796 12:44:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.796 12:44:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:42.796 12:44:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:43.058 12:44:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:43.058 12:44:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:43.058 12:44:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:43.058 12:44:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:43.058 12:44:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:43.058 12:44:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:43.058 12:44:08 
sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:43.058 12:44:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:43.058 12:44:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:43.058 12:44:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:43.058 12:44:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:55.285 12:44:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:55.285 12:44:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:55.285 12:44:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:55.285 12:44:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:55.285 12:44:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:55.285 12:44:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:55.285 12:44:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.285 12:44:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.285 12:44:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.285 12:44:20 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:55.285 12:44:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:55.285 12:44:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:55.285 12:44:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:55.285 12:44:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:55.285 12:44:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:55.285 12:44:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:55.285 12:44:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:55.285 12:44:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:55.285 12:44:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:55.285 12:44:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:55.285 12:44:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:55.285 12:44:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.285 12:44:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.285 12:44:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.285 12:44:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:55.285 12:44:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:55.285 [2024-11-20 12:44:20.620762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:55.285 [2024-11-20 12:44:20.621835] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.285 [2024-11-20 12:44:20.621861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.285 [2024-11-20 12:44:20.621871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.285 [2024-11-20 12:44:20.621888] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.285 [2024-11-20 12:44:20.621895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.285 [2024-11-20 12:44:20.621904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.285 [2024-11-20 12:44:20.621911] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.285 [2024-11-20 12:44:20.621922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.285 [2024-11-20 12:44:20.621929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.285 [2024-11-20 12:44:20.621937] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.285 [2024-11-20 12:44:20.621944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.285 [2024-11-20 12:44:20.621952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.543 [2024-11-20 12:44:21.020775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
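
A note on the `xtrace_disable` / `set +x` pairs that bracket every rpc_cmd call above (with the `[[ 0 == 0 ]]` status check in between): the harness turns command tracing off around chatty helpers and restores it afterwards, which is why those lines recur throughout this log. A minimal sketch of the idea; the state-saving variable is invented and the real helpers in autotest_common.sh are more involved:

xtrace_disable() {
    PREV_XTRACE_STATE=$(set +o | grep xtrace)   # remember whether -x was on
    set +x                                      # @10: stop tracing
}
xtrace_restore() {
    eval "$PREV_XTRACE_STATE"                   # re-apply the saved setting
}
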
00:11:55.543 [2024-11-20 12:44:21.021787] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.543 [2024-11-20 12:44:21.021817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.543 [2024-11-20 12:44:21.021828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.543 [2024-11-20 12:44:21.021842] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.543 [2024-11-20 12:44:21.021851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.543 [2024-11-20 12:44:21.021857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.543 [2024-11-20 12:44:21.021866] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.543 [2024-11-20 12:44:21.021873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.543 [2024-11-20 12:44:21.021881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.543 [2024-11-20 12:44:21.021888] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.543 [2024-11-20 12:44:21.021898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.543 [2024-11-20 12:44:21.021905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.801 12:44:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:55.801 12:44:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:55.801 12:44:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:55.801 12:44:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:55.801 12:44:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:55.801 12:44:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:55.801 12:44:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.801 12:44:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.801 12:44:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.801 12:44:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:55.801 12:44:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:55.801 12:44:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:55.801 12:44:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:55.801 12:44:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:55.801 12:44:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:55.801 12:44:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:55.801 12:44:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:55.801 12:44:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:55.801 12:44:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:56.059 12:44:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:56.059 12:44:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:56.059 12:44:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:08.274 12:44:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:08.274 12:44:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:08.274 12:44:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:08.274 12:44:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:08.274 12:44:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:08.274 12:44:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:08.274 12:44:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.274 12:44:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:08.274 12:44:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.274 12:44:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:08.274 12:44:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:08.274 12:44:33 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.70 00:12:08.274 12:44:33 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.70 00:12:08.274 12:44:33 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:08.274 12:44:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.70 00:12:08.274 12:44:33 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.70 2 00:12:08.274 remove_attach_helper took 45.70s to complete (handling 2 nvme drive(s)) 12:44:33 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:08.274 12:44:33 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67402 00:12:08.274 12:44:33 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67402 ']' 00:12:08.274 12:44:33 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67402 00:12:08.274 12:44:33 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:08.274 12:44:33 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:08.274 12:44:33 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67402 00:12:08.274 12:44:33 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:08.274 12:44:33 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:08.274 12:44:33 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67402' 00:12:08.274 killing process with pid 67402 00:12:08.274 12:44:33 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67402 00:12:08.274 12:44:33 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67402 00:12:09.216 12:44:34 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:09.477 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:10.048 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:10.048 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:10.048 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:10.048 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:10.310 00:12:10.310 real 2m29.832s 00:12:10.310 user 1m51.713s 00:12:10.310 sys 0m16.606s 00:12:10.310 12:44:35 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:10.310 ************************************ 00:12:10.310 12:44:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:10.310 END TEST sw_hotplug 00:12:10.310 ************************************ 00:12:10.310 12:44:35 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:10.310 12:44:35 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:10.310 12:44:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:10.310 12:44:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.310 12:44:35 -- common/autotest_common.sh@10 -- # set +x 00:12:10.310 ************************************ 00:12:10.310 START TEST nvme_xnvme 00:12:10.310 ************************************ 00:12:10.310 12:44:35 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:10.310 * Looking for test storage... 00:12:10.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:10.310 12:44:35 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:10.310 12:44:35 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:10.310 12:44:35 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:10.310 12:44:35 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:10.310 12:44:35 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:10.310 12:44:35 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:10.310 12:44:35 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:10.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.310 --rc genhtml_branch_coverage=1 00:12:10.310 --rc genhtml_function_coverage=1 00:12:10.310 --rc genhtml_legend=1 00:12:10.310 --rc geninfo_all_blocks=1 00:12:10.310 --rc geninfo_unexecuted_blocks=1 00:12:10.310 00:12:10.310 ' 00:12:10.310 12:44:35 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:10.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.310 --rc genhtml_branch_coverage=1 00:12:10.310 --rc genhtml_function_coverage=1 00:12:10.310 --rc genhtml_legend=1 00:12:10.310 --rc geninfo_all_blocks=1 00:12:10.310 --rc geninfo_unexecuted_blocks=1 00:12:10.310 00:12:10.310 ' 00:12:10.310 12:44:35 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:10.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.310 --rc genhtml_branch_coverage=1 00:12:10.310 --rc genhtml_function_coverage=1 00:12:10.310 --rc genhtml_legend=1 00:12:10.310 --rc geninfo_all_blocks=1 00:12:10.310 --rc geninfo_unexecuted_blocks=1 00:12:10.310 00:12:10.310 ' 00:12:10.310 12:44:35 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:10.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.310 --rc genhtml_branch_coverage=1 00:12:10.310 --rc genhtml_function_coverage=1 00:12:10.310 --rc genhtml_legend=1 00:12:10.310 --rc geninfo_all_blocks=1 00:12:10.310 --rc geninfo_unexecuted_blocks=1 00:12:10.310 00:12:10.310 ' 00:12:10.310 12:44:35 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:10.310 12:44:35 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:10.310 12:44:35 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:10.310 12:44:35 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:10.310 12:44:35 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:10.310 12:44:35 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:10.310 12:44:35 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:10.310 12:44:35 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:10.310 12:44:35 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:10.310 12:44:35 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:10.310 12:44:35 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:10.311 12:44:35 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:10.311 12:44:35 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:10.311 12:44:35 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:10.311 12:44:35 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:10.575 12:44:35 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:10.575 12:44:35 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:10.575 12:44:35 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:10.575 12:44:35 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:10.575 12:44:35 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:10.575 12:44:35 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:10.575 12:44:35 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:10.575 12:44:35 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:10.575 12:44:35 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:10.575 12:44:35 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:10.575 12:44:35 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:10.575 12:44:35 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:10.575 12:44:35 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:10.575 12:44:35 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:10.575 #define SPDK_CONFIG_H 00:12:10.575 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:10.575 #define SPDK_CONFIG_APPS 1 00:12:10.575 #define SPDK_CONFIG_ARCH native 00:12:10.575 #define SPDK_CONFIG_ASAN 1 00:12:10.575 #undef SPDK_CONFIG_AVAHI 00:12:10.575 #undef SPDK_CONFIG_CET 00:12:10.575 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:10.575 #define SPDK_CONFIG_COVERAGE 1 00:12:10.575 #define SPDK_CONFIG_CROSS_PREFIX 00:12:10.575 #undef SPDK_CONFIG_CRYPTO 00:12:10.575 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:10.575 #undef SPDK_CONFIG_CUSTOMOCF 00:12:10.575 #undef SPDK_CONFIG_DAOS 00:12:10.575 #define SPDK_CONFIG_DAOS_DIR 00:12:10.575 #define SPDK_CONFIG_DEBUG 1 00:12:10.575 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:10.575 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:10.575 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:10.575 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:10.575 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:10.575 #undef SPDK_CONFIG_DPDK_UADK 00:12:10.575 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:10.575 #define SPDK_CONFIG_EXAMPLES 1 00:12:10.575 #undef SPDK_CONFIG_FC 00:12:10.575 #define SPDK_CONFIG_FC_PATH 00:12:10.575 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:10.575 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:10.575 #define SPDK_CONFIG_FSDEV 1 00:12:10.575 #undef SPDK_CONFIG_FUSE 00:12:10.575 #undef SPDK_CONFIG_FUZZER 00:12:10.575 #define SPDK_CONFIG_FUZZER_LIB 00:12:10.575 #undef SPDK_CONFIG_GOLANG 00:12:10.575 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:10.575 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:10.575 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:10.575 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:10.575 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:10.575 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:10.575 #undef SPDK_CONFIG_HAVE_LZ4 00:12:10.575 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:10.575 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:10.575 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:10.575 #define SPDK_CONFIG_IDXD 1 00:12:10.575 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:10.575 #undef SPDK_CONFIG_IPSEC_MB 00:12:10.575 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:10.575 #define SPDK_CONFIG_ISAL 1 00:12:10.575 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:10.575 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:10.575 #define SPDK_CONFIG_LIBDIR 00:12:10.575 #undef SPDK_CONFIG_LTO 00:12:10.575 #define SPDK_CONFIG_MAX_LCORES 128 00:12:10.575 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:10.575 #define SPDK_CONFIG_NVME_CUSE 1 00:12:10.575 #undef SPDK_CONFIG_OCF 00:12:10.575 #define SPDK_CONFIG_OCF_PATH 00:12:10.575 #define SPDK_CONFIG_OPENSSL_PATH 00:12:10.575 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:10.575 #define SPDK_CONFIG_PGO_DIR 00:12:10.575 #undef SPDK_CONFIG_PGO_USE 00:12:10.575 #define SPDK_CONFIG_PREFIX /usr/local 00:12:10.575 #undef SPDK_CONFIG_RAID5F 00:12:10.575 #undef SPDK_CONFIG_RBD 00:12:10.575 #define SPDK_CONFIG_RDMA 1 00:12:10.575 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:10.575 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:10.575 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:10.575 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:10.575 #define SPDK_CONFIG_SHARED 1 00:12:10.575 #undef SPDK_CONFIG_SMA 00:12:10.575 #define SPDK_CONFIG_TESTS 1 00:12:10.575 #undef SPDK_CONFIG_TSAN 00:12:10.575 #define SPDK_CONFIG_UBLK 1 00:12:10.575 #define SPDK_CONFIG_UBSAN 1 00:12:10.575 #undef SPDK_CONFIG_UNIT_TESTS 00:12:10.575 #undef SPDK_CONFIG_URING 00:12:10.575 #define SPDK_CONFIG_URING_PATH 00:12:10.575 #undef SPDK_CONFIG_URING_ZNS 00:12:10.575 #undef SPDK_CONFIG_USDT 00:12:10.575 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:10.575 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:10.575 #undef SPDK_CONFIG_VFIO_USER 00:12:10.575 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:10.575 #define SPDK_CONFIG_VHOST 1 00:12:10.575 #define SPDK_CONFIG_VIRTIO 1 00:12:10.575 #undef SPDK_CONFIG_VTUNE 00:12:10.575 #define SPDK_CONFIG_VTUNE_DIR 00:12:10.575 #define SPDK_CONFIG_WERROR 1 00:12:10.575 #define SPDK_CONFIG_WPDK_DIR 00:12:10.575 #define SPDK_CONFIG_XNVME 1 00:12:10.575 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:10.575 12:44:35 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:10.575 12:44:35 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:10.575 12:44:35 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:10.575 12:44:35 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.575 12:44:35 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.575 12:44:35 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.575 12:44:35 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.576 12:44:35 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.576 12:44:35 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.576 12:44:35 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:10.576 12:44:35 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:10.576 12:44:35 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:10.576 12:44:35 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:10.576 12:44:35 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:10.576 12:44:35 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:10.576 12:44:35 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:10.576 12:44:35 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:10.576 12:44:35 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:10.576 12:44:35 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:10.576 12:44:35 nvme_xnvme -- pm/common@68 -- # uname -s 00:12:10.576 12:44:35 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:10.576 12:44:35 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:10.576 
12:44:35 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:10.576 12:44:35 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:10.576 12:44:35 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:10.576 12:44:35 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:10.576 12:44:35 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:10.576 12:44:35 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:10.576 12:44:35 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:10.576 12:44:35 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:10.576 12:44:35 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:10.576 12:44:35 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:10.576 12:44:35 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:10.576 12:44:35 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:10.576 12:44:35 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:10.577 12:44:35 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:10.577 12:44:35 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
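The autotest_common.sh entries above (@199 through @244) configure the sanitizer runtimes before any test binary runs: ASAN and UBSAN options are exported, and known-benign leaks such as libfuse3.so are written to a suppression file that LeakSanitizer reads via LSAN_OPTIONS. A minimal sketch of that pattern, assuming a simplified re-creation rather than the verbatim harness code:

# Sanitizer wiring as traced above (sketch; the option strings are copied
# from the trace, everything else is simplified).
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

# Known-benign leaks go into a suppression file that LeakSanitizer reads
# at process exit; the trace suppresses libfuse3.so this way.
asan_suppression_file=/var/tmp/asan_suppression_file
rm -f "$asan_suppression_file"
echo 'leak:libfuse3.so' > "$asan_suppression_file"
export LSAN_OPTIONS=suppressions=$asan_suppression_file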
00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68760 ]] 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68760 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.hegTGg 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.hegTGg/tests/xnvme /tmp/spdk.hegTGg 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:10.577 12:44:35 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975142400 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593235456 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:10.577 12:44:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260629504 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975142400 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593235456 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:10.578 12:44:35 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265245696 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=96941383680 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=2761396224 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:10.578 * Looking for test storage... 
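The set_test_storage walk-through that follows (@381 onward) scans the df output captured above for the first candidate directory whose backing filesystem can hold the requested 2147483648 bytes, then exports it as SPDK_TEST_STORAGE. A condensed sketch of that selection logic, assuming a hypothetical helper name and fallback path rather than the harness's array bookkeeping:

# Storage-candidate selection as traced below (sketch; pick_test_storage is
# a hypothetical stand-in for set_test_storage in autotest_common.sh).
pick_test_storage() {
    local requested_size=$1; shift
    local dir avail
    for dir in "$@"; do
        [[ -d $dir ]] || continue
        # Free bytes on the filesystem backing $dir.
        avail=$(df -B1 --output=avail "$dir" | tail -n1)
        if (( avail >= requested_size )); then
            printf '* Found test storage at %s\n' "$dir" >&2
            echo "$dir"
            return 0
        fi
    done
    return 1
}

# e.g. request 2 GiB, preferring the test dir over a /tmp fallback:
storage=$(pick_test_storage 2147483648 \
    /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk-scratch)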
00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13975142400 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:10.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:10.578 12:44:35 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:10.578 12:44:35 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:10.578 12:44:35 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:10.578 12:44:35 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:10.578 12:44:35 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:10.578 12:44:35 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:10.578 12:44:35 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:10.578 12:44:35 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:10.578 12:44:35 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:10.578 12:44:35 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:10.578 12:44:35 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:10.578 12:44:35 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:10.578 12:44:35 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:10.578 12:44:35 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:10.578 12:44:35 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:10.578 12:44:35 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:10.578 12:44:35 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:10.578 12:44:35 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:10.578 12:44:35 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:10.578 12:44:35 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:10.578 12:44:35 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:10.578 12:44:35 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:10.578 12:44:35 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:10.578 12:44:36 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:10.578 12:44:36 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:10.578 12:44:36 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:10.578 12:44:36 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:10.578 12:44:36 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:10.578 12:44:36 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:10.578 12:44:36 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:10.578 12:44:36 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:10.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.578 --rc genhtml_branch_coverage=1 00:12:10.578 --rc genhtml_function_coverage=1 00:12:10.578 --rc genhtml_legend=1 00:12:10.578 --rc geninfo_all_blocks=1 00:12:10.578 --rc geninfo_unexecuted_blocks=1 00:12:10.578 00:12:10.578 ' 00:12:10.578 12:44:36 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:10.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.578 --rc genhtml_branch_coverage=1 00:12:10.578 --rc genhtml_function_coverage=1 00:12:10.578 --rc genhtml_legend=1 00:12:10.578 --rc geninfo_all_blocks=1 
00:12:10.578 --rc geninfo_unexecuted_blocks=1 00:12:10.578 00:12:10.578 ' 00:12:10.578 12:44:36 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:10.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.579 --rc genhtml_branch_coverage=1 00:12:10.579 --rc genhtml_function_coverage=1 00:12:10.579 --rc genhtml_legend=1 00:12:10.579 --rc geninfo_all_blocks=1 00:12:10.579 --rc geninfo_unexecuted_blocks=1 00:12:10.579 00:12:10.579 ' 00:12:10.579 12:44:36 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:10.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.579 --rc genhtml_branch_coverage=1 00:12:10.579 --rc genhtml_function_coverage=1 00:12:10.579 --rc genhtml_legend=1 00:12:10.579 --rc geninfo_all_blocks=1 00:12:10.579 --rc geninfo_unexecuted_blocks=1 00:12:10.579 00:12:10.579 ' 00:12:10.579 12:44:36 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:10.579 12:44:36 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:10.579 12:44:36 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.579 12:44:36 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.579 12:44:36 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.579 12:44:36 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.579 12:44:36 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.579 12:44:36 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.579 12:44:36 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:10.579 12:44:36 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.579 12:44:36 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:10.579 12:44:36 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:10.579 12:44:36 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:10.579 12:44:36 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:10.579 12:44:36 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:10.579 12:44:36 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:10.579 12:44:36 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:10.579 12:44:36 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:10.579 12:44:36 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:10.579 12:44:36 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:10.579 12:44:36 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:10.579 12:44:36 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:10.579 12:44:36 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:10.579 12:44:36 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:10.579 12:44:36 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:10.579 12:44:36 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:10.579 12:44:36 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:10.579 12:44:36 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:10.579 12:44:36 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:10.579 12:44:36 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:10.579 12:44:36 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:10.579 12:44:36 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:10.840 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:11.101 Waiting for block devices as requested 00:12:11.101 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:11.101 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:11.362 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:11.362 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:16.651 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:16.651 12:44:41 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:16.651 12:44:42 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:16.651 12:44:42 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:16.912 12:44:42 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:16.912 12:44:42 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:16.912 12:44:42 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:16.912 12:44:42 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:16.912 12:44:42 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:16.912 No valid GPT data, bailing 00:12:16.912 12:44:42 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:16.912 12:44:42 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:16.912 12:44:42 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:16.912 12:44:42 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:16.912 12:44:42 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:16.912 12:44:42 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:16.912 12:44:42 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:16.912 12:44:42 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:16.912 12:44:42 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:16.912 12:44:42 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:16.912 12:44:42 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:16.912 12:44:42 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:16.912 12:44:42 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:16.912 12:44:42 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:16.912 12:44:42 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:16.912 12:44:42 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:16.912 12:44:42 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:16.912 12:44:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:16.912 12:44:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.912 12:44:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:16.912 ************************************ 00:12:16.912 START TEST xnvme_rpc 00:12:16.912 ************************************ 00:12:16.912 12:44:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:16.912 12:44:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:16.912 12:44:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:16.912 12:44:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:16.912 12:44:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:16.912 12:44:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69141 00:12:16.912 12:44:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69141 00:12:16.912 12:44:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69141 ']' 00:12:16.912 12:44:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.912 12:44:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.912 12:44:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.912 12:44:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:16.912 12:44:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.912 12:44:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.174 [2024-11-20 12:44:42.474719] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
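The trace above dumps the whole test matrix from xnvme/common.sh: three io_mechanisms, each mapped to a device node, with io_uring_cmd going through the NVMe char device rather than the block device, and prep_nvme reloading the kernel driver with polled queues before any I/O runs. A minimal sketch of that setup, assuming root and a single controller (the poll_queues value mirrors the nproc call in the trace):

# Mechanism -> device node, as declared in xnvme/common.sh above.
declare -A xnvme_filename=(
    [libaio]=/dev/nvme0n1       # block device, Linux native AIO path
    [io_uring]=/dev/nvme0n1     # same block device, submitted via io_uring
    [io_uring_cmd]=/dev/ng0n1   # char-device namespace for uring passthrough
)
# prep_nvme in the trace: reload the driver with one poll queue per CPU.
modprobe -r nvme
modprobe nvme poll_queues="$(nproc)"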
00:12:17.174 [2024-11-20 12:44:42.475301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69141 ] 00:12:17.174 [2024-11-20 12:44:42.636883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.435 [2024-11-20 12:44:42.767418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.033 xnvme_bdev 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.033 12:44:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:18.295 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.295 12:44:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:18.295 12:44:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:18.295 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.295 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.295 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.295 12:44:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69141 00:12:18.295 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69141 ']' 00:12:18.295 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69141 00:12:18.295 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:18.295 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.295 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69141 00:12:18.295 killing process with pid 69141 00:12:18.295 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:18.295 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.295 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69141' 00:12:18.295 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69141 00:12:18.295 12:44:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69141 00:12:19.684 00:12:19.684 real 0m2.798s 00:12:19.684 user 0m2.768s 00:12:19.684 sys 0m0.441s 00:12:19.684 12:44:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.684 12:44:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.684 ************************************ 00:12:19.684 END TEST xnvme_rpc 00:12:19.684 ************************************ 00:12:19.947 12:44:45 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:19.947 12:44:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:19.947 12:44:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.947 12:44:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:19.947 ************************************ 00:12:19.947 START TEST xnvme_bdevperf 00:12:19.947 ************************************ 00:12:19.947 12:44:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:19.947 12:44:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:19.947 12:44:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:19.947 12:44:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:19.947 12:44:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:19.947 12:44:45 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:19.947 12:44:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:19.947 12:44:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:19.947 { 00:12:19.947 "subsystems": [ 00:12:19.947 { 00:12:19.947 "subsystem": "bdev", 00:12:19.947 "config": [ 00:12:19.947 { 00:12:19.947 "params": { 00:12:19.947 "io_mechanism": "libaio", 00:12:19.947 "conserve_cpu": false, 00:12:19.947 "filename": "/dev/nvme0n1", 00:12:19.947 "name": "xnvme_bdev" 00:12:19.947 }, 00:12:19.947 "method": "bdev_xnvme_create" 00:12:19.947 }, 00:12:19.947 { 00:12:19.947 "method": "bdev_wait_for_examine" 00:12:19.947 } 00:12:19.947 ] 00:12:19.947 } 00:12:19.947 ] 00:12:19.947 } 00:12:19.947 [2024-11-20 12:44:45.307839] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:12:19.947 [2024-11-20 12:44:45.307976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69215 ] 00:12:20.209 [2024-11-20 12:44:45.471017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.209 [2024-11-20 12:44:45.599588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.470 Running I/O for 5 seconds... 00:12:22.799 35116.00 IOPS, 137.17 MiB/s [2024-11-20T12:44:49.257Z] 34626.00 IOPS, 135.26 MiB/s [2024-11-20T12:44:50.202Z] 33224.00 IOPS, 129.78 MiB/s [2024-11-20T12:44:51.146Z] 33319.00 IOPS, 130.15 MiB/s [2024-11-20T12:44:51.147Z] 32834.80 IOPS, 128.26 MiB/s 00:12:25.628 Latency(us) 00:12:25.628 [2024-11-20T12:44:51.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:25.628 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:25.628 xnvme_bdev : 5.01 32809.04 128.16 0.00 0.00 1946.20 401.72 7158.55 00:12:25.628 [2024-11-20T12:44:51.147Z] =================================================================================================================== 00:12:25.628 [2024-11-20T12:44:51.147Z] Total : 32809.04 128.16 0.00 0.00 1946.20 401.72 7158.55 00:12:26.201 12:44:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:26.202 12:44:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:26.202 12:44:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:26.202 12:44:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:26.202 12:44:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 { 00:12:26.463 "subsystems": [ 00:12:26.463 { 00:12:26.463 "subsystem": "bdev", 00:12:26.463 "config": [ 00:12:26.463 { 00:12:26.463 "params": { 00:12:26.463 "io_mechanism": "libaio", 00:12:26.463 "conserve_cpu": false, 00:12:26.463 "filename": "/dev/nvme0n1", 00:12:26.463 "name": "xnvme_bdev" 00:12:26.463 }, 00:12:26.463 "method": "bdev_xnvme_create" 00:12:26.463 }, 00:12:26.463 { 00:12:26.463 "method": "bdev_wait_for_examine" 00:12:26.463 } 00:12:26.463 ] 00:12:26.463 } 00:12:26.463 ] 00:12:26.463 } 00:12:26.463 [2024-11-20 12:44:51.785121] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
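The xnvme_rpc test that just finished above is a plain create/inspect/delete round-trip against a running spdk_tgt. A standalone sketch mirroring the calls in the trace (SPDK_ROOT and the sleep-based wait are assumptions; the real test uses its waitforlisten helper):

SPDK_ROOT=${SPDK_ROOT:-/home/vagrant/spdk_repo/spdk}
"$SPDK_ROOT/build/bin/spdk_tgt" &    # target listens on /var/tmp/spdk.sock
tgt=$!
trap 'kill $tgt' EXIT
sleep 2                              # crude stand-in for waitforlisten

rpc=$SPDK_ROOT/scripts/rpc.py
"$rpc" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio   # no -c: conserve_cpu=false
"$rpc" framework_get_config bdev |
    jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # -> libaio
"$rpc" bdev_xnvme_delete xnvme_bdev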
00:12:26.463 [2024-11-20 12:44:51.785472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69290 ] 00:12:26.463 [2024-11-20 12:44:51.949946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.725 [2024-11-20 12:44:52.085256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.985 Running I/O for 5 seconds... 00:12:29.318 33589.00 IOPS, 131.21 MiB/s [2024-11-20T12:44:55.408Z] 34599.50 IOPS, 135.15 MiB/s [2024-11-20T12:44:56.791Z] 34075.00 IOPS, 133.11 MiB/s [2024-11-20T12:44:57.733Z] 34107.75 IOPS, 133.23 MiB/s [2024-11-20T12:44:57.733Z] 34018.60 IOPS, 132.89 MiB/s 00:12:32.214 Latency(us) 00:12:32.214 [2024-11-20T12:44:57.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:32.214 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:32.214 xnvme_bdev : 5.01 33978.42 132.73 0.00 0.00 1879.05 428.50 7965.14 00:12:32.214 [2024-11-20T12:44:57.733Z] =================================================================================================================== 00:12:32.214 [2024-11-20T12:44:57.733Z] Total : 33978.42 132.73 0.00 0.00 1879.05 428.50 7965.14 00:12:32.785 ************************************ 00:12:32.785 END TEST xnvme_bdevperf 00:12:32.785 ************************************ 00:12:32.785 00:12:32.785 real 0m12.959s 00:12:32.785 user 0m5.161s 00:12:32.785 sys 0m6.162s 00:12:32.785 12:44:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.785 12:44:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:32.785 12:44:58 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:32.785 12:44:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:32.785 12:44:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.785 12:44:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:32.785 ************************************ 00:12:32.785 START TEST xnvme_fio_plugin 00:12:32.785 ************************************ 00:12:32.785 12:44:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:32.785 12:44:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:32.785 12:44:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:32.785 12:44:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:32.785 12:44:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:32.785 12:44:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:32.785 12:44:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:32.785 12:44:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:12:32.785 12:44:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:32.786 12:44:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:32.786 12:44:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:32.786 12:44:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:32.786 12:44:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:32.786 12:44:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:32.786 12:44:58 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:32.786 12:44:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:32.786 12:44:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:32.786 12:44:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:32.786 12:44:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:32.786 12:44:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:32.786 12:44:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:32.786 12:44:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:32.786 12:44:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:32.786 12:44:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:33.047 { 00:12:33.047 "subsystems": [ 00:12:33.048 { 00:12:33.048 "subsystem": "bdev", 00:12:33.048 "config": [ 00:12:33.048 { 00:12:33.048 "params": { 00:12:33.048 "io_mechanism": "libaio", 00:12:33.048 "conserve_cpu": false, 00:12:33.048 "filename": "/dev/nvme0n1", 00:12:33.048 "name": "xnvme_bdev" 00:12:33.048 }, 00:12:33.048 "method": "bdev_xnvme_create" 00:12:33.048 }, 00:12:33.048 { 00:12:33.048 "method": "bdev_wait_for_examine" 00:12:33.048 } 00:12:33.048 ] 00:12:33.048 } 00:12:33.048 ] 00:12:33.048 } 00:12:33.048 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:33.048 fio-3.35 00:12:33.048 Starting 1 thread 00:12:39.641 00:12:39.641 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69409: Wed Nov 20 12:45:04 2024 00:12:39.641 read: IOPS=33.0k, BW=129MiB/s (135MB/s)(644MiB/5002msec) 00:12:39.641 slat (usec): min=4, max=2221, avg=17.77, stdev=90.85 00:12:39.641 clat (usec): min=106, max=4771, avg=1438.04, stdev=484.48 00:12:39.641 lat (usec): min=181, max=4840, avg=1455.81, stdev=474.09 00:12:39.641 clat percentiles (usec): 00:12:39.641 | 1.00th=[ 322], 5.00th=[ 644], 10.00th=[ 824], 20.00th=[ 1045], 00:12:39.641 | 30.00th=[ 1205], 40.00th=[ 1336], 50.00th=[ 1450], 60.00th=[ 1565], 00:12:39.641 | 70.00th=[ 1663], 80.00th=[ 1795], 90.00th=[ 1991], 95.00th=[ 2212], 00:12:39.641 | 99.00th=[ 2802], 99.50th=[ 3097], 99.90th=[ 3818], 99.95th=[ 4015], 00:12:39.641 | 99.99th=[ 4490] 00:12:39.641 bw ( KiB/s): 
min=125928, max=144160, per=99.77%, avg=131579.22, stdev=5549.48, samples=9 00:12:39.641 iops : min=31482, max=36040, avg=32894.67, stdev=1387.44, samples=9 00:12:39.641 lat (usec) : 250=0.44%, 500=2.19%, 750=4.92%, 1000=9.78% 00:12:39.641 lat (msec) : 2=72.89%, 4=9.72%, 10=0.05% 00:12:39.641 cpu : usr=54.51%, sys=38.49%, ctx=13, majf=0, minf=764 00:12:39.641 IO depths : 1=0.8%, 2=1.7%, 4=3.8%, 8=8.9%, 16=22.4%, 32=60.3%, >=64=2.1% 00:12:39.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.641 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:39.641 issued rwts: total=164925,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:39.641 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:39.641 00:12:39.641 Run status group 0 (all jobs): 00:12:39.641 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=644MiB (676MB), run=5002-5002msec 00:12:39.641 ----------------------------------------------------- 00:12:39.641 Suppressions used: 00:12:39.641 count bytes template 00:12:39.641 1 11 /usr/src/fio/parse.c 00:12:39.641 1 8 libtcmalloc_minimal.so 00:12:39.641 1 904 libcrypto.so 00:12:39.641 ----------------------------------------------------- 00:12:39.641 00:12:39.904 12:45:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:39.904 12:45:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:39.904 12:45:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:39.904 12:45:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:39.904 12:45:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:39.904 12:45:05 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:39.904 12:45:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:39.904 12:45:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:39.904 12:45:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:39.904 12:45:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:39.904 12:45:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:39.904 12:45:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:39.904 12:45:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:39.904 12:45:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:39.904 12:45:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:39.904 12:45:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:39.904 12:45:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:39.904 
12:45:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:39.904 12:45:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:39.904 12:45:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:39.904 12:45:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:39.904 { 00:12:39.904 "subsystems": [ 00:12:39.904 { 00:12:39.904 "subsystem": "bdev", 00:12:39.904 "config": [ 00:12:39.904 { 00:12:39.904 "params": { 00:12:39.904 "io_mechanism": "libaio", 00:12:39.904 "conserve_cpu": false, 00:12:39.904 "filename": "/dev/nvme0n1", 00:12:39.904 "name": "xnvme_bdev" 00:12:39.904 }, 00:12:39.904 "method": "bdev_xnvme_create" 00:12:39.904 }, 00:12:39.904 { 00:12:39.904 "method": "bdev_wait_for_examine" 00:12:39.904 } 00:12:39.904 ] 00:12:39.904 } 00:12:39.904 ] 00:12:39.904 } 00:12:39.904 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:39.904 fio-3.35 00:12:39.904 Starting 1 thread 00:12:46.499 00:12:46.499 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69501: Wed Nov 20 12:45:11 2024 00:12:46.499 write: IOPS=37.1k, BW=145MiB/s (152MB/s)(725MiB/5001msec); 0 zone resets 00:12:46.499 slat (usec): min=4, max=1732, avg=19.74, stdev=80.29 00:12:46.499 clat (usec): min=106, max=5338, avg=1190.10, stdev=508.52 00:12:46.499 lat (usec): min=170, max=5561, avg=1209.83, stdev=502.77 00:12:46.499 clat percentiles (usec): 00:12:46.499 | 1.00th=[ 273], 5.00th=[ 461], 10.00th=[ 603], 20.00th=[ 783], 00:12:46.499 | 30.00th=[ 914], 40.00th=[ 1020], 50.00th=[ 1139], 60.00th=[ 1254], 00:12:46.499 | 70.00th=[ 1385], 80.00th=[ 1565], 90.00th=[ 1811], 95.00th=[ 2073], 00:12:46.499 | 99.00th=[ 2737], 99.50th=[ 3130], 99.90th=[ 3982], 99.95th=[ 4424], 00:12:46.499 | 99.99th=[ 4948] 00:12:46.500 bw ( KiB/s): min=115824, max=157208, per=98.76%, avg=146672.89, stdev=12339.67, samples=9 00:12:46.500 iops : min=28956, max=39302, avg=36668.22, stdev=3084.92, samples=9 00:12:46.500 lat (usec) : 250=0.76%, 500=5.48%, 750=11.56%, 1000=20.03% 00:12:46.500 lat (msec) : 2=56.14%, 4=5.92%, 10=0.10% 00:12:46.500 cpu : usr=38.86%, sys=49.94%, ctx=40, majf=0, minf=764 00:12:46.500 IO depths : 1=0.4%, 2=1.1%, 4=2.8%, 8=8.2%, 16=23.5%, 32=61.9%, >=64=2.1% 00:12:46.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.500 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:12:46.500 issued rwts: total=0,185686,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:46.500 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:46.500 00:12:46.500 Run status group 0 (all jobs): 00:12:46.500 WRITE: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=725MiB (761MB), run=5001-5001msec 00:12:46.760 ----------------------------------------------------- 00:12:46.760 Suppressions used: 00:12:46.760 count bytes template 00:12:46.760 1 11 /usr/src/fio/parse.c 00:12:46.760 1 8 libtcmalloc_minimal.so 00:12:46.760 1 904 libcrypto.so 00:12:46.760 ----------------------------------------------------- 00:12:46.760 00:12:46.760 00:12:46.760 real 0m13.837s 00:12:46.760 user 0m7.470s 00:12:46.760 sys 0m5.080s 00:12:46.760 
12:45:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:46.760 ************************************ 00:12:46.760 END TEST xnvme_fio_plugin 00:12:46.760 ************************************ 00:12:46.760 12:45:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:46.760 12:45:12 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:46.760 12:45:12 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:46.760 12:45:12 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:46.760 12:45:12 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:46.760 12:45:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:46.760 12:45:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:46.760 12:45:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:46.760 ************************************ 00:12:46.760 START TEST xnvme_rpc 00:12:46.760 ************************************ 00:12:46.760 12:45:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:46.760 12:45:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:46.760 12:45:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:46.760 12:45:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:46.760 12:45:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:46.760 12:45:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69587 00:12:46.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.760 12:45:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69587 00:12:46.760 12:45:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:46.760 12:45:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69587 ']' 00:12:46.760 12:45:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.760 12:45:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:46.760 12:45:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.760 12:45:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:46.760 12:45:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.760 [2024-11-20 12:45:12.258108] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
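The fio passes that just completed exercise the same bdev through SPDK's fio plugin instead of bdevperf: the generated JSON reaches fio over a process-substitution fd (--spdk_json_conf=/dev/fd/62), and on ASan builds libasan must be preloaded ahead of the plugin, exactly as the LD_PRELOAD line in the trace shows. A sketch of one such invocation, with a temp file standing in for the fd trick:

conf=$(mktemp)
cat > "$conf" <<'JSON'
{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_xnvme_create","params":{"io_mechanism":"libaio",
   "conserve_cpu":false,"filename":"/dev/nvme0n1","name":"xnvme_bdev"}},
  {"method":"bdev_wait_for_examine"}]}]}
JSON
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
LD_PRELOAD="/usr/lib64/libasan.so.8 $plugin" \
    fio --ioengine=spdk_bdev --spdk_json_conf="$conf" --filename=xnvme_bdev \
        --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
        --time_based --runtime=5 --thread=1 --name=xnvme_bdev
rm -f "$conf"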
00:12:46.760 [2024-11-20 12:45:12.258264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69587 ] 00:12:47.021 [2024-11-20 12:45:12.424377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.281 [2024-11-20 12:45:12.557625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.854 xnvme_bdev 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.854 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.116 12:45:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:48.116 12:45:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:48.116 12:45:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:48.116 12:45:13 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.116 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.116 12:45:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:48.116 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.116 12:45:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:48.116 12:45:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:48.116 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.116 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.116 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.116 12:45:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69587 00:12:48.116 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69587 ']' 00:12:48.116 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69587 00:12:48.116 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:48.116 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:48.116 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69587 00:12:48.116 killing process with pid 69587 00:12:48.116 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:48.116 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:48.116 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69587' 00:12:48.116 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69587 00:12:48.116 12:45:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69587 00:12:50.031 00:12:50.031 real 0m2.922s 00:12:50.031 user 0m2.922s 00:12:50.031 sys 0m0.477s 00:12:50.031 12:45:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.031 ************************************ 00:12:50.031 END TEST xnvme_rpc 00:12:50.031 ************************************ 00:12:50.031 12:45:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.031 12:45:15 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:50.031 12:45:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:50.031 12:45:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.031 12:45:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.031 ************************************ 00:12:50.031 START TEST xnvme_bdevperf 00:12:50.031 ************************************ 00:12:50.031 12:45:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:50.031 12:45:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:50.032 12:45:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:50.032 12:45:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:50.032 12:45:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:50.032 12:45:15 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:50.032 12:45:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:50.032 12:45:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:50.032 { 00:12:50.032 "subsystems": [ 00:12:50.032 { 00:12:50.032 "subsystem": "bdev", 00:12:50.032 "config": [ 00:12:50.032 { 00:12:50.032 "params": { 00:12:50.032 "io_mechanism": "libaio", 00:12:50.032 "conserve_cpu": true, 00:12:50.032 "filename": "/dev/nvme0n1", 00:12:50.032 "name": "xnvme_bdev" 00:12:50.032 }, 00:12:50.032 "method": "bdev_xnvme_create" 00:12:50.032 }, 00:12:50.032 { 00:12:50.032 "method": "bdev_wait_for_examine" 00:12:50.032 } 00:12:50.032 ] 00:12:50.032 } 00:12:50.032 ] 00:12:50.032 } 00:12:50.032 [2024-11-20 12:45:15.236733] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:12:50.032 [2024-11-20 12:45:15.236903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69660 ] 00:12:50.032 [2024-11-20 12:45:15.402394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.032 [2024-11-20 12:45:15.535731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.670 Running I/O for 5 seconds... 00:12:52.557 34647.00 IOPS, 135.34 MiB/s [2024-11-20T12:45:19.020Z] 33493.00 IOPS, 130.83 MiB/s [2024-11-20T12:45:19.964Z] 33063.00 IOPS, 129.15 MiB/s [2024-11-20T12:45:20.909Z] 32898.50 IOPS, 128.51 MiB/s 00:12:55.390 Latency(us) 00:12:55.390 [2024-11-20T12:45:20.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.390 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:55.390 xnvme_bdev : 5.00 32647.27 127.53 0.00 0.00 1955.75 463.16 7813.91 00:12:55.390 [2024-11-20T12:45:20.909Z] =================================================================================================================== 00:12:55.390 [2024-11-20T12:45:20.909Z] Total : 32647.27 127.53 0.00 0.00 1955.75 463.16 7813.91 00:12:56.335 12:45:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:56.335 12:45:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:56.335 12:45:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:56.335 12:45:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:56.335 12:45:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:56.335 { 00:12:56.335 "subsystems": [ 00:12:56.335 { 00:12:56.335 "subsystem": "bdev", 00:12:56.335 "config": [ 00:12:56.335 { 00:12:56.335 "params": { 00:12:56.335 "io_mechanism": "libaio", 00:12:56.335 "conserve_cpu": true, 00:12:56.335 "filename": "/dev/nvme0n1", 00:12:56.335 "name": "xnvme_bdev" 00:12:56.335 }, 00:12:56.335 "method": "bdev_xnvme_create" 00:12:56.335 }, 00:12:56.335 { 00:12:56.335 "method": "bdev_wait_for_examine" 00:12:56.335 } 00:12:56.335 ] 00:12:56.335 } 00:12:56.335 ] 00:12:56.335 } 00:12:56.335 [2024-11-20 12:45:21.721217] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
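The conserve_cpu=true sweep above differs from the first pass in a single RPC argument: the trace's cc map turns the boolean into an optional -c flag on bdev_xnvme_create. Sketch of that pattern (rpc_cmd here stands in for the harness wrapper seen in the trace):

declare -A cc=([false]="" [true]="-c")
for conserve_cpu in false true; do
    # Unquoted so the false case expands to nothing, matching the traced call shape.
    rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio ${cc[$conserve_cpu]}
    rpc_cmd framework_get_config bdev |
        jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
    rpc_cmd bdev_xnvme_delete xnvme_bdev
done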
00:12:56.335 [2024-11-20 12:45:21.721366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69731 ] 00:12:56.597 [2024-11-20 12:45:21.885838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.597 [2024-11-20 12:45:22.009690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.859 Running I/O for 5 seconds... 00:12:59.194 36443.00 IOPS, 142.36 MiB/s [2024-11-20T12:45:25.658Z] 36343.50 IOPS, 141.97 MiB/s [2024-11-20T12:45:26.610Z] 36486.67 IOPS, 142.53 MiB/s [2024-11-20T12:45:27.564Z] 36300.25 IOPS, 141.80 MiB/s 00:13:02.045 Latency(us) 00:13:02.045 [2024-11-20T12:45:27.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.045 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:02.045 xnvme_bdev : 5.00 36473.00 142.47 0.00 0.00 1750.45 278.84 14317.10 00:13:02.045 [2024-11-20T12:45:27.564Z] =================================================================================================================== 00:13:02.045 [2024-11-20T12:45:27.565Z] Total : 36473.00 142.47 0.00 0.00 1750.45 278.84 14317.10 00:13:02.619 00:13:02.619 real 0m12.962s 00:13:02.619 user 0m5.409s 00:13:02.619 sys 0m6.019s 00:13:02.619 ************************************ 00:13:02.619 END TEST xnvme_bdevperf 00:13:02.619 ************************************ 00:13:02.619 12:45:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.619 12:45:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:02.881 12:45:28 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:02.881 12:45:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:02.881 12:45:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.881 12:45:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:02.881 ************************************ 00:13:02.881 START TEST xnvme_fio_plugin 00:13:02.881 ************************************ 00:13:02.881 12:45:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:02.881 12:45:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:02.881 12:45:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:02.881 12:45:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:02.881 12:45:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:02.881 12:45:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:02.881 12:45:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:02.881 12:45:28 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:02.881 12:45:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 
00:13:02.881 12:45:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:02.881 12:45:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:02.881 12:45:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:02.881 12:45:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:02.881 12:45:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:02.881 12:45:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:02.881 12:45:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:02.881 12:45:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:02.881 12:45:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:02.881 12:45:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:02.881 12:45:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:02.881 12:45:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:02.881 12:45:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:02.881 12:45:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:02.881 12:45:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:02.881 { 00:13:02.881 "subsystems": [ 00:13:02.881 { 00:13:02.881 "subsystem": "bdev", 00:13:02.881 "config": [ 00:13:02.881 { 00:13:02.881 "params": { 00:13:02.881 "io_mechanism": "libaio", 00:13:02.881 "conserve_cpu": true, 00:13:02.881 "filename": "/dev/nvme0n1", 00:13:02.881 "name": "xnvme_bdev" 00:13:02.881 }, 00:13:02.881 "method": "bdev_xnvme_create" 00:13:02.881 }, 00:13:02.881 { 00:13:02.881 "method": "bdev_wait_for_examine" 00:13:02.881 } 00:13:02.881 ] 00:13:02.881 } 00:13:02.881 ] 00:13:02.881 } 00:13:02.881 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:02.881 fio-3.35 00:13:02.881 Starting 1 thread 00:13:09.471 00:13:09.471 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69856: Wed Nov 20 12:45:34 2024 00:13:09.471 read: IOPS=35.5k, BW=139MiB/s (145MB/s)(693MiB/5001msec) 00:13:09.471 slat (usec): min=4, max=2193, avg=19.61, stdev=90.42 00:13:09.471 clat (usec): min=16, max=4624, avg=1276.81, stdev=498.76 00:13:09.471 lat (usec): min=162, max=4708, avg=1296.42, stdev=490.41 00:13:09.471 clat percentiles (usec): 00:13:09.471 | 1.00th=[ 277], 5.00th=[ 502], 10.00th=[ 668], 20.00th=[ 873], 00:13:09.471 | 30.00th=[ 1012], 40.00th=[ 1123], 50.00th=[ 1254], 60.00th=[ 1369], 00:13:09.471 | 70.00th=[ 1500], 80.00th=[ 1663], 90.00th=[ 1876], 95.00th=[ 2114], 00:13:09.471 | 99.00th=[ 2737], 99.50th=[ 3097], 99.90th=[ 3687], 99.95th=[ 3851], 00:13:09.471 | 99.99th=[ 4228] 00:13:09.471 bw ( KiB/s): min=134376, max=158040, per=100.00%, avg=142637.33, stdev=7313.70, 
samples=9 00:13:09.471 iops : min=33594, max=39510, avg=35659.33, stdev=1828.42, samples=9 00:13:09.471 lat (usec) : 20=0.01%, 250=0.72%, 500=4.24%, 750=8.47%, 1000=15.67% 00:13:09.471 lat (msec) : 2=64.06%, 4=6.81%, 10=0.03% 00:13:09.471 cpu : usr=44.82%, sys=47.24%, ctx=16, majf=0, minf=764 00:13:09.471 IO depths : 1=0.6%, 2=1.3%, 4=3.2%, 8=8.5%, 16=23.1%, 32=61.3%, >=64=2.1% 00:13:09.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:09.471 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:09.471 issued rwts: total=177490,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:09.471 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:09.471 00:13:09.471 Run status group 0 (all jobs): 00:13:09.471 READ: bw=139MiB/s (145MB/s), 139MiB/s-139MiB/s (145MB/s-145MB/s), io=693MiB (727MB), run=5001-5001msec 00:13:09.731 ----------------------------------------------------- 00:13:09.731 Suppressions used: 00:13:09.731 count bytes template 00:13:09.731 1 11 /usr/src/fio/parse.c 00:13:09.731 1 8 libtcmalloc_minimal.so 00:13:09.731 1 904 libcrypto.so 00:13:09.731 ----------------------------------------------------- 00:13:09.731 00:13:09.731 12:45:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:09.731 12:45:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:09.731 12:45:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:09.731 12:45:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:09.731 12:45:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:09.731 12:45:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:09.731 12:45:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:09.731 12:45:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:09.731 12:45:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:09.731 12:45:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:09.731 12:45:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:09.731 12:45:35 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:09.731 12:45:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:09.731 12:45:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:09.731 12:45:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:09.731 12:45:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:09.731 12:45:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:09.731 12:45:35 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:09.731 12:45:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:09.731 12:45:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:09.731 12:45:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:09.731 { 00:13:09.731 "subsystems": [ 00:13:09.731 { 00:13:09.731 "subsystem": "bdev", 00:13:09.731 "config": [ 00:13:09.731 { 00:13:09.731 "params": { 00:13:09.731 "io_mechanism": "libaio", 00:13:09.731 "conserve_cpu": true, 00:13:09.731 "filename": "/dev/nvme0n1", 00:13:09.731 "name": "xnvme_bdev" 00:13:09.731 }, 00:13:09.731 "method": "bdev_xnvme_create" 00:13:09.731 }, 00:13:09.731 { 00:13:09.731 "method": "bdev_wait_for_examine" 00:13:09.731 } 00:13:09.731 ] 00:13:09.731 } 00:13:09.731 ] 00:13:09.731 } 00:13:09.993 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:09.993 fio-3.35 00:13:09.993 Starting 1 thread 00:13:16.581 00:13:16.581 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69942: Wed Nov 20 12:45:41 2024 00:13:16.581 write: IOPS=32.6k, BW=127MiB/s (133MB/s)(636MiB/5001msec); 0 zone resets 00:13:16.581 slat (usec): min=4, max=1769, avg=21.06, stdev=81.29 00:13:16.581 clat (usec): min=84, max=256417, avg=1390.88, stdev=6374.54 00:13:16.581 lat (usec): min=89, max=256422, avg=1411.94, stdev=6373.80 00:13:16.581 clat percentiles (usec): 00:13:16.581 | 1.00th=[ 245], 5.00th=[ 400], 10.00th=[ 537], 20.00th=[ 725], 00:13:16.581 | 30.00th=[ 873], 40.00th=[ 996], 50.00th=[ 1123], 60.00th=[ 1254], 00:13:16.581 | 70.00th=[ 1401], 80.00th=[ 1565], 90.00th=[ 1844], 95.00th=[ 2114], 00:13:16.581 | 99.00th=[ 2769], 99.50th=[ 3097], 99.90th=[121111], 99.95th=[154141], 00:13:16.581 | 99.99th=[254804] 00:13:16.581 bw ( KiB/s): min=74824, max=163568, per=100.00%, avg=134537.78, stdev=27007.66, samples=9 00:13:16.581 iops : min=18706, max=40892, avg=33634.44, stdev=6751.91, samples=9 00:13:16.581 lat (usec) : 100=0.01%, 250=1.09%, 500=7.23%, 750=13.36%, 1000=18.65% 00:13:16.581 lat (msec) : 2=53.11%, 4=6.39%, 10=0.02%, 50=0.03%, 100=0.01% 00:13:16.581 lat (msec) : 250=0.09%, 500=0.03% 00:13:16.581 cpu : usr=43.00%, sys=48.38%, ctx=11, majf=0, minf=764 00:13:16.581 IO depths : 1=0.4%, 2=1.0%, 4=2.9%, 8=8.7%, 16=23.8%, 32=61.2%, >=64=2.1% 00:13:16.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.581 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:13:16.581 issued rwts: total=0,162850,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:16.581 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:16.581 00:13:16.581 Run status group 0 (all jobs): 00:13:16.581 WRITE: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=636MiB (667MB), run=5001-5001msec 00:13:16.581 ----------------------------------------------------- 00:13:16.581 Suppressions used: 00:13:16.581 count bytes template 00:13:16.581 1 11 /usr/src/fio/parse.c 00:13:16.581 1 8 libtcmalloc_minimal.so 00:13:16.581 1 904 libcrypto.so 00:13:16.581 ----------------------------------------------------- 00:13:16.581 00:13:16.581 00:13:16.581 real 0m13.850s 00:13:16.581 user 
0m7.194s 00:13:16.581 sys 0m5.411s 00:13:16.581 12:45:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.581 ************************************ 00:13:16.581 END TEST xnvme_fio_plugin 00:13:16.581 ************************************ 00:13:16.581 12:45:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:16.843 12:45:42 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:16.843 12:45:42 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:16.843 12:45:42 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:16.843 12:45:42 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:16.843 12:45:42 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:16.843 12:45:42 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:16.843 12:45:42 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:16.843 12:45:42 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:16.843 12:45:42 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:16.843 12:45:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:16.843 12:45:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:16.843 12:45:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:16.843 ************************************ 00:13:16.843 START TEST xnvme_rpc 00:13:16.843 ************************************ 00:13:16.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.843 12:45:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:16.843 12:45:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:16.843 12:45:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:16.843 12:45:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:16.843 12:45:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:16.843 12:45:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70028 00:13:16.843 12:45:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70028 00:13:16.843 12:45:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70028 ']' 00:13:16.843 12:45:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.843 12:45:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:16.843 12:45:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.843 12:45:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:16.843 12:45:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:16.843 12:45:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.843 [2024-11-20 12:45:42.208343] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
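The xnvme_fio_plugin passes that just finished above exist mostly to solve a sanitizer ordering problem: the external spdk_bdev ioengine is built with ASAN, so the ASAN runtime has to be loaded before fio itself, which is what the ldd/grep/LD_PRELOAD dance traced from common/autotest_common.sh accomplishes. A condensed sketch of that logic, using the same paths as the trace (the real helper also probes libclang_rt.asan and simply skips the preload when no sanitizer is linked):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  # resolve the ASAN runtime the plugin links against, e.g. /usr/lib64/libasan.so.8
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  # preload the sanitizer first, then the ioengine, then run the workload;
  # the trace feeds the bdev JSON over /dev/fd/62, a regular file works too
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=bdev.json --filename=xnvme_bdev \
    --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
    --time_based --runtime=5 --thread=1 --name xnvme_bdev

bdev.json here is a stand-in for the JSON block printed in the log (a bdev_xnvme_create entry followed by bdev_wait_for_examine).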
00:13:16.843 [2024-11-20 12:45:42.208496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70028 ] 00:13:17.105 [2024-11-20 12:45:42.365272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.105 [2024-11-20 12:45:42.494551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.678 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:17.678 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:17.678 12:45:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:17.678 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.678 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.940 xnvme_bdev 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:17.940 12:45:43 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70028 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70028 ']' 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70028 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70028 00:13:17.940 killing process with pid 70028 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70028' 00:13:17.940 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70028 00:13:17.941 12:45:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70028 00:13:19.860 00:13:19.860 real 0m2.927s 00:13:19.860 user 0m2.957s 00:13:19.860 sys 0m0.467s 00:13:19.860 12:45:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:19.860 ************************************ 00:13:19.860 END TEST xnvme_rpc 00:13:19.860 ************************************ 00:13:19.860 12:45:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.860 12:45:45 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:19.860 12:45:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:19.860 12:45:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:19.860 12:45:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:19.860 ************************************ 00:13:19.860 START TEST xnvme_bdevperf 00:13:19.860 ************************************ 00:13:19.860 12:45:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:19.860 12:45:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:19.860 12:45:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:19.860 12:45:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:19.860 12:45:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:19.860 12:45:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:19.860 12:45:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:19.860 12:45:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:19.860 { 00:13:19.860 "subsystems": [ 00:13:19.860 { 00:13:19.860 "subsystem": "bdev", 00:13:19.860 "config": [ 00:13:19.860 { 00:13:19.860 "params": { 00:13:19.860 "io_mechanism": "io_uring", 00:13:19.860 "conserve_cpu": false, 00:13:19.860 "filename": "/dev/nvme0n1", 00:13:19.860 "name": "xnvme_bdev" 00:13:19.860 }, 00:13:19.860 "method": "bdev_xnvme_create" 00:13:19.860 }, 00:13:19.860 { 00:13:19.860 "method": "bdev_wait_for_examine" 00:13:19.860 } 00:13:19.860 ] 00:13:19.860 } 00:13:19.860 ] 00:13:19.860 } 00:13:19.860 [2024-11-20 12:45:45.182267] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:13:19.860 [2024-11-20 12:45:45.182827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70102 ] 00:13:19.860 [2024-11-20 12:45:45.346150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.120 [2024-11-20 12:45:45.466393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.383 Running I/O for 5 seconds... 00:13:22.272 33925.00 IOPS, 132.52 MiB/s [2024-11-20T12:45:49.175Z] 34042.00 IOPS, 132.98 MiB/s [2024-11-20T12:45:50.111Z] 34318.00 IOPS, 134.05 MiB/s [2024-11-20T12:45:51.052Z] 35256.50 IOPS, 137.72 MiB/s [2024-11-20T12:45:51.052Z] 35478.20 IOPS, 138.59 MiB/s 00:13:25.533 Latency(us) 00:13:25.533 [2024-11-20T12:45:51.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.533 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:25.533 xnvme_bdev : 5.01 35449.22 138.47 0.00 0.00 1800.89 373.37 11846.89 00:13:25.533 [2024-11-20T12:45:51.052Z] =================================================================================================================== 00:13:25.533 [2024-11-20T12:45:51.052Z] Total : 35449.22 138.47 0.00 0.00 1800.89 373.37 11846.89 00:13:26.103 12:45:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:26.103 12:45:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:26.103 12:45:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:26.103 12:45:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:26.103 12:45:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:26.103 { 00:13:26.103 "subsystems": [ 00:13:26.103 { 00:13:26.103 "subsystem": "bdev", 00:13:26.103 "config": [ 00:13:26.103 { 00:13:26.103 "params": { 00:13:26.103 "io_mechanism": "io_uring", 00:13:26.103 "conserve_cpu": false, 00:13:26.103 "filename": "/dev/nvme0n1", 00:13:26.103 "name": "xnvme_bdev" 00:13:26.103 }, 00:13:26.103 "method": "bdev_xnvme_create" 00:13:26.103 }, 00:13:26.103 { 00:13:26.103 "method": "bdev_wait_for_examine" 00:13:26.103 } 00:13:26.103 ] 00:13:26.103 } 00:13:26.103 ] 00:13:26.103 } 00:13:26.103 [2024-11-20 12:45:51.575758] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
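Each xnvme_bdevperf pass above is the stock bdevperf example binary pointed at a one-bdev JSON config. A standalone reproduction of the randread run, assuming the config is saved to xnvme.json rather than piped over /dev/fd/62 as gen_conf does in the trace:

  cat > xnvme.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "io_mechanism": "io_uring",
              "conserve_cpu": false,
              "filename": "/dev/nvme0n1",
              "name": "xnvme_bdev"
            },
            "method": "bdev_xnvme_create"
          },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF
  build/examples/bdevperf --json xnvme.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096

The flags match the command line in the trace: queue depth 64, random 4 KiB I/O for 5 seconds, restricted to the xnvme_bdev target.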
00:13:26.103 [2024-11-20 12:45:51.575870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70178 ] 00:13:26.363 [2024-11-20 12:45:51.737014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.363 [2024-11-20 12:45:51.838573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.621 Running I/O for 5 seconds... 00:13:28.574 38988.00 IOPS, 152.30 MiB/s [2024-11-20T12:45:55.479Z] 37790.50 IOPS, 147.62 MiB/s [2024-11-20T12:45:56.423Z] 36906.67 IOPS, 144.17 MiB/s [2024-11-20T12:45:57.366Z] 36499.50 IOPS, 142.58 MiB/s 00:13:31.847 Latency(us) 00:13:31.847 [2024-11-20T12:45:57.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.847 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:31.847 xnvme_bdev : 5.00 36145.19 141.19 0.00 0.00 1765.90 270.97 9779.99 00:13:31.847 [2024-11-20T12:45:57.366Z] =================================================================================================================== 00:13:31.847 [2024-11-20T12:45:57.366Z] Total : 36145.19 141.19 0.00 0.00 1765.90 270.97 9779.99 00:13:32.417 00:13:32.417 real 0m12.755s 00:13:32.417 user 0m5.756s 00:13:32.417 sys 0m6.696s 00:13:32.417 ************************************ 00:13:32.417 END TEST xnvme_bdevperf 00:13:32.417 ************************************ 00:13:32.417 12:45:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:32.417 12:45:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:32.417 12:45:57 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:32.417 12:45:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:32.417 12:45:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.417 12:45:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:32.679 ************************************ 00:13:32.679 START TEST xnvme_fio_plugin 00:13:32.679 ************************************ 00:13:32.679 12:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:32.679 12:45:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:32.679 12:45:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:32.679 12:45:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:32.679 12:45:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:32.679 12:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:32.679 12:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:32.679 12:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:32.679 12:45:57 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:32.679 12:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:32.679 12:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:32.679 12:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:32.679 12:45:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:32.679 12:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:32.679 12:45:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:32.679 12:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:32.679 12:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:32.679 12:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:32.679 12:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:32.679 12:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:32.679 12:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:32.679 12:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:32.679 12:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:32.679 12:45:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:32.679 { 00:13:32.679 "subsystems": [ 00:13:32.679 { 00:13:32.679 "subsystem": "bdev", 00:13:32.679 "config": [ 00:13:32.679 { 00:13:32.679 "params": { 00:13:32.679 "io_mechanism": "io_uring", 00:13:32.679 "conserve_cpu": false, 00:13:32.679 "filename": "/dev/nvme0n1", 00:13:32.679 "name": "xnvme_bdev" 00:13:32.679 }, 00:13:32.679 "method": "bdev_xnvme_create" 00:13:32.679 }, 00:13:32.679 { 00:13:32.679 "method": "bdev_wait_for_examine" 00:13:32.679 } 00:13:32.679 ] 00:13:32.679 } 00:13:32.679 ] 00:13:32.679 } 00:13:32.679 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:32.679 fio-3.35 00:13:32.679 Starting 1 thread 00:13:39.322 00:13:39.322 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70297: Wed Nov 20 12:46:03 2024 00:13:39.322 read: IOPS=35.6k, BW=139MiB/s (146MB/s)(695MiB/5001msec) 00:13:39.322 slat (usec): min=2, max=339, avg= 3.83, stdev= 2.63 00:13:39.322 clat (usec): min=784, max=3952, avg=1643.32, stdev=323.60 00:13:39.322 lat (usec): min=787, max=3980, avg=1647.14, stdev=324.02 00:13:39.322 clat percentiles (usec): 00:13:39.322 | 1.00th=[ 1004], 5.00th=[ 1156], 10.00th=[ 1254], 20.00th=[ 1369], 00:13:39.322 | 30.00th=[ 1467], 40.00th=[ 1532], 50.00th=[ 1614], 60.00th=[ 1696], 00:13:39.322 | 70.00th=[ 1795], 80.00th=[ 1909], 90.00th=[ 2057], 95.00th=[ 2212], 00:13:39.322 | 99.00th=[ 2507], 99.50th=[ 2671], 99.90th=[ 3130], 99.95th=[ 3195], 00:13:39.322 | 99.99th=[ 3785] 00:13:39.322 bw ( KiB/s): min=132096, max=156160, per=100.00%, avg=143388.44, 
stdev=8432.90, samples=9 00:13:39.322 iops : min=33024, max=39040, avg=35847.11, stdev=2108.23, samples=9 00:13:39.322 lat (usec) : 1000=0.94% 00:13:39.322 lat (msec) : 2=85.83%, 4=13.24% 00:13:39.322 cpu : usr=31.02%, sys=67.12%, ctx=13, majf=0, minf=762 00:13:39.322 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:39.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.322 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:39.322 issued rwts: total=177792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.322 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:39.322 00:13:39.322 Run status group 0 (all jobs): 00:13:39.322 READ: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=695MiB (728MB), run=5001-5001msec 00:13:39.322 ----------------------------------------------------- 00:13:39.322 Suppressions used: 00:13:39.322 count bytes template 00:13:39.322 1 11 /usr/src/fio/parse.c 00:13:39.322 1 8 libtcmalloc_minimal.so 00:13:39.322 1 904 libcrypto.so 00:13:39.322 ----------------------------------------------------- 00:13:39.322 00:13:39.584 12:46:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:39.584 12:46:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:39.584 12:46:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:39.584 12:46:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:39.584 12:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:39.584 12:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:39.584 12:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:39.584 12:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:39.584 12:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:39.584 12:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:39.584 12:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:39.584 12:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:39.584 12:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:39.584 12:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:39.584 12:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:39.584 12:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:39.584 12:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:39.584 12:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:13:39.584 12:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:39.584 12:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:39.584 12:46:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:39.584 { 00:13:39.584 "subsystems": [ 00:13:39.584 { 00:13:39.584 "subsystem": "bdev", 00:13:39.584 "config": [ 00:13:39.584 { 00:13:39.584 "params": { 00:13:39.584 "io_mechanism": "io_uring", 00:13:39.584 "conserve_cpu": false, 00:13:39.584 "filename": "/dev/nvme0n1", 00:13:39.584 "name": "xnvme_bdev" 00:13:39.584 }, 00:13:39.584 "method": "bdev_xnvme_create" 00:13:39.584 }, 00:13:39.584 { 00:13:39.584 "method": "bdev_wait_for_examine" 00:13:39.584 } 00:13:39.584 ] 00:13:39.584 } 00:13:39.584 ] 00:13:39.584 } 00:13:39.584 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:39.584 fio-3.35 00:13:39.584 Starting 1 thread 00:13:46.190 00:13:46.190 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70383: Wed Nov 20 12:46:10 2024 00:13:46.190 write: IOPS=34.0k, BW=133MiB/s (139MB/s)(665MiB/5001msec); 0 zone resets 00:13:46.190 slat (usec): min=2, max=138, avg= 4.10, stdev= 2.42 00:13:46.190 clat (usec): min=253, max=10604, avg=1711.75, stdev=308.22 00:13:46.190 lat (usec): min=256, max=10607, avg=1715.85, stdev=308.64 00:13:46.190 clat percentiles (usec): 00:13:46.190 | 1.00th=[ 1188], 5.00th=[ 1303], 10.00th=[ 1385], 20.00th=[ 1467], 00:13:46.190 | 30.00th=[ 1532], 40.00th=[ 1598], 50.00th=[ 1663], 60.00th=[ 1745], 00:13:46.190 | 70.00th=[ 1827], 80.00th=[ 1926], 90.00th=[ 2089], 95.00th=[ 2245], 00:13:46.190 | 99.00th=[ 2573], 99.50th=[ 2704], 99.90th=[ 3294], 99.95th=[ 3851], 00:13:46.190 | 99.99th=[ 6456] 00:13:46.190 bw ( KiB/s): min=130616, max=141672, per=100.00%, avg=136249.78, stdev=3164.41, samples=9 00:13:46.190 iops : min=32654, max=35418, avg=34062.44, stdev=791.10, samples=9 00:13:46.190 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.03% 00:13:46.190 lat (msec) : 2=84.89%, 4=15.02%, 10=0.04%, 20=0.01% 00:13:46.190 cpu : usr=30.39%, sys=68.03%, ctx=12, majf=0, minf=762 00:13:46.190 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.2%, >=64=1.6% 00:13:46.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.190 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:46.190 issued rwts: total=0,170213,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.190 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:46.190 00:13:46.190 Run status group 0 (all jobs): 00:13:46.190 WRITE: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=665MiB (697MB), run=5001-5001msec 00:13:46.452 ----------------------------------------------------- 00:13:46.452 Suppressions used: 00:13:46.452 count bytes template 00:13:46.452 1 11 /usr/src/fio/parse.c 00:13:46.452 1 8 libtcmalloc_minimal.so 00:13:46.452 1 904 libcrypto.so 00:13:46.452 ----------------------------------------------------- 00:13:46.452 00:13:46.452 ************************************ 00:13:46.452 END TEST xnvme_fio_plugin 00:13:46.452 ************************************ 00:13:46.452 00:13:46.452 real 0m13.865s 
00:13:46.452 user 0m6.028s 00:13:46.452 sys 0m7.327s 00:13:46.452 12:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.452 12:46:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:46.452 12:46:11 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:46.452 12:46:11 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:46.452 12:46:11 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:46.452 12:46:11 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:46.452 12:46:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:46.452 12:46:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.452 12:46:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:46.452 ************************************ 00:13:46.452 START TEST xnvme_rpc 00:13:46.452 ************************************ 00:13:46.452 12:46:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:46.452 12:46:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:46.452 12:46:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:46.452 12:46:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:46.452 12:46:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:46.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.452 12:46:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70473 00:13:46.452 12:46:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70473 00:13:46.452 12:46:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70473 ']' 00:13:46.452 12:46:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.452 12:46:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:46.452 12:46:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:46.452 12:46:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.452 12:46:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:46.452 12:46:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.452 [2024-11-20 12:46:11.955962] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
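The xnvme_rpc test starting here repeats the earlier one with conserve_cpu flipped on: cc["false"] expands to nothing and cc["true"] to -c, so the only change is the extra flag on the create call. Reduced to plain RPC traffic against the spdk_tgt socket (rpc_cmd in the trace is a thin wrapper around SPDK's rpc.py client, whose exact option spelling is assumed here):

  # previous pass: conserve_cpu=false
  rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring
  # this pass: conserve_cpu=true
  rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
  # ... parameter checks via framework_get_config (see the note further down) ...
  rpc.py bdev_xnvme_delete xnvme_bdev

Reading the flag name together with the usr/sys CPU splits in the fio output suggests conserve_cpu trades completion-polling CPU time for latency, though the log itself only verifies that the parameter round-trips correctly.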
00:13:46.452 [2024-11-20 12:46:11.956331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70473 ] 00:13:46.715 [2024-11-20 12:46:12.113988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.978 [2024-11-20 12:46:12.234049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.551 12:46:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:47.551 12:46:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:47.551 12:46:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:47.551 12:46:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.551 12:46:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.551 xnvme_bdev 00:13:47.551 12:46:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.551 12:46:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:47.551 12:46:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:47.551 12:46:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:47.551 12:46:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.551 12:46:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.551 12:46:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.551 12:46:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:47.551 12:46:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:47.551 12:46:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:47.551 12:46:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.551 12:46:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.551 12:46:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:47.551 12:46:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.551 12:46:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:47.551 12:46:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:47.551 12:46:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:47.551 12:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.551 12:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.551 12:46:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:47.551 12:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.551 12:46:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:47.551 12:46:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:47.551 12:46:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:47.551 12:46:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:47.551 12:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.551 12:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.813 12:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.813 12:46:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:47.813 12:46:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:47.813 12:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.813 12:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.813 12:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.813 12:46:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70473 00:13:47.813 12:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70473 ']' 00:13:47.813 12:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70473 00:13:47.813 12:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:47.813 12:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:47.813 12:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70473 00:13:47.813 killing process with pid 70473 00:13:47.813 12:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:47.813 12:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:47.813 12:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70473' 00:13:47.813 12:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70473 00:13:47.813 12:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70473 00:13:49.731 00:13:49.731 real 0m2.884s 00:13:49.731 user 0m2.890s 00:13:49.731 sys 0m0.472s 00:13:49.731 12:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.731 12:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.731 ************************************ 00:13:49.731 END TEST xnvme_rpc 00:13:49.731 ************************************ 00:13:49.731 12:46:14 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:49.731 12:46:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:49.731 12:46:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.731 12:46:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:49.731 ************************************ 00:13:49.731 START TEST xnvme_bdevperf 00:13:49.731 ************************************ 00:13:49.731 12:46:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:49.731 12:46:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:49.731 12:46:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:49.731 12:46:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:49.731 12:46:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:49.731 12:46:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
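Worth noting how the rpc_xnvme checks above verify each parameter: rather than trusting the create call, the test reads the live configuration back out of the target and filters it with jq, comparing the result against the expected value. The pattern, with the jq filter taken verbatim from xnvme/common.sh in the trace (again assuming rpc.py as the client):

  # dump every registered bdev config entry, keep only bdev_xnvme_create,
  # and extract one parameter (name, filename, io_mechanism, conserve_cpu)
  rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'

A [[ value == expected ]] test on the output then passes or fails the case.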
00:13:49.731 12:46:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:49.731 12:46:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:49.731 { 00:13:49.731 "subsystems": [ 00:13:49.731 { 00:13:49.731 "subsystem": "bdev", 00:13:49.731 "config": [ 00:13:49.731 { 00:13:49.731 "params": { 00:13:49.731 "io_mechanism": "io_uring", 00:13:49.731 "conserve_cpu": true, 00:13:49.731 "filename": "/dev/nvme0n1", 00:13:49.731 "name": "xnvme_bdev" 00:13:49.731 }, 00:13:49.731 "method": "bdev_xnvme_create" 00:13:49.731 }, 00:13:49.731 { 00:13:49.731 "method": "bdev_wait_for_examine" 00:13:49.731 } 00:13:49.731 ] 00:13:49.731 } 00:13:49.731 ] 00:13:49.731 } 00:13:49.731 [2024-11-20 12:46:14.887776] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:13:49.731 [2024-11-20 12:46:14.887922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70547 ] 00:13:49.731 [2024-11-20 12:46:15.052470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.731 [2024-11-20 12:46:15.180255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.992 Running I/O for 5 seconds... 00:13:52.325 34100.00 IOPS, 133.20 MiB/s [2024-11-20T12:46:18.790Z] 35140.50 IOPS, 137.27 MiB/s [2024-11-20T12:46:19.776Z] 34813.00 IOPS, 135.99 MiB/s [2024-11-20T12:46:20.721Z] 34660.75 IOPS, 135.39 MiB/s [2024-11-20T12:46:20.721Z] 34853.20 IOPS, 136.15 MiB/s 00:13:55.202 Latency(us) 00:13:55.202 [2024-11-20T12:46:20.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.202 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:55.202 xnvme_bdev : 5.01 34809.39 135.97 0.00 0.00 1833.37 901.12 12149.37 00:13:55.202 [2024-11-20T12:46:20.721Z] =================================================================================================================== 00:13:55.202 [2024-11-20T12:46:20.721Z] Total : 34809.39 135.97 0.00 0.00 1833.37 901.12 12149.37 00:13:55.776 12:46:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:55.776 12:46:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:55.776 12:46:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:55.776 12:46:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:55.776 12:46:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:55.776 { 00:13:55.776 "subsystems": [ 00:13:55.776 { 00:13:55.776 "subsystem": "bdev", 00:13:55.776 "config": [ 00:13:55.776 { 00:13:55.776 "params": { 00:13:55.776 "io_mechanism": "io_uring", 00:13:55.776 "conserve_cpu": true, 00:13:55.776 "filename": "/dev/nvme0n1", 00:13:55.776 "name": "xnvme_bdev" 00:13:55.776 }, 00:13:55.776 "method": "bdev_xnvme_create" 00:13:55.776 }, 00:13:55.776 { 00:13:55.776 "method": "bdev_wait_for_examine" 00:13:55.776 } 00:13:55.776 ] 00:13:55.776 } 00:13:55.776 ] 00:13:55.776 } 00:13:56.037 [2024-11-20 12:46:21.319266] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
00:13:56.037 [2024-11-20 12:46:21.319410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70617 ] 00:13:56.037 [2024-11-20 12:46:21.485084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.298 [2024-11-20 12:46:21.604323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.559 Running I/O for 5 seconds... 00:13:58.447 35394.00 IOPS, 138.26 MiB/s [2024-11-20T12:46:24.910Z] 35714.50 IOPS, 139.51 MiB/s [2024-11-20T12:46:26.295Z] 35810.67 IOPS, 139.89 MiB/s [2024-11-20T12:46:27.240Z] 35815.50 IOPS, 139.90 MiB/s 00:14:01.721 Latency(us) 00:14:01.721 [2024-11-20T12:46:27.240Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.721 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:01.721 xnvme_bdev : 5.00 35598.67 139.06 0.00 0.00 1793.13 633.30 9477.51 00:14:01.721 [2024-11-20T12:46:27.240Z] =================================================================================================================== 00:14:01.721 [2024-11-20T12:46:27.240Z] Total : 35598.67 139.06 0.00 0.00 1793.13 633.30 9477.51 00:14:02.294 00:14:02.294 real 0m12.847s 00:14:02.294 user 0m6.549s 00:14:02.294 sys 0m5.642s 00:14:02.294 12:46:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:02.294 ************************************ 00:14:02.294 END TEST xnvme_bdevperf 00:14:02.294 ************************************ 00:14:02.294 12:46:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:02.294 12:46:27 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:02.294 12:46:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:02.294 12:46:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:02.294 12:46:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:02.294 ************************************ 00:14:02.294 START TEST xnvme_fio_plugin 00:14:02.294 ************************************ 00:14:02.294 12:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:02.294 12:46:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:02.294 12:46:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:02.294 12:46:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:02.294 12:46:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:02.294 12:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:02.294 12:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:02.294 12:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:02.294 12:46:27 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:02.294 12:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:02.294 12:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:02.294 12:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:02.294 12:46:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:02.294 12:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:02.294 12:46:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:02.294 12:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:02.294 12:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:02.294 12:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:02.294 12:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:02.294 12:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:02.294 12:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:02.294 12:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:02.294 12:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:02.294 12:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:02.294 { 00:14:02.294 "subsystems": [ 00:14:02.294 { 00:14:02.294 "subsystem": "bdev", 00:14:02.294 "config": [ 00:14:02.294 { 00:14:02.294 "params": { 00:14:02.294 "io_mechanism": "io_uring", 00:14:02.295 "conserve_cpu": true, 00:14:02.295 "filename": "/dev/nvme0n1", 00:14:02.295 "name": "xnvme_bdev" 00:14:02.295 }, 00:14:02.295 "method": "bdev_xnvme_create" 00:14:02.295 }, 00:14:02.295 { 00:14:02.295 "method": "bdev_wait_for_examine" 00:14:02.295 } 00:14:02.295 ] 00:14:02.295 } 00:14:02.295 ] 00:14:02.295 } 00:14:02.557 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:02.557 fio-3.35 00:14:02.557 Starting 1 thread 00:14:09.150 00:14:09.150 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70742: Wed Nov 20 12:46:33 2024 00:14:09.150 read: IOPS=34.8k, BW=136MiB/s (143MB/s)(680MiB/5002msec) 00:14:09.150 slat (nsec): min=2731, max=65340, avg=3570.95, stdev=1837.03 00:14:09.150 clat (usec): min=906, max=3551, avg=1692.11, stdev=267.71 00:14:09.150 lat (usec): min=909, max=3587, avg=1695.68, stdev=267.90 00:14:09.150 clat percentiles (usec): 00:14:09.150 | 1.00th=[ 1188], 5.00th=[ 1319], 10.00th=[ 1385], 20.00th=[ 1467], 00:14:09.150 | 30.00th=[ 1532], 40.00th=[ 1598], 50.00th=[ 1663], 60.00th=[ 1729], 00:14:09.150 | 70.00th=[ 1811], 80.00th=[ 1909], 90.00th=[ 2057], 95.00th=[ 2180], 00:14:09.150 | 99.00th=[ 2409], 99.50th=[ 2540], 99.90th=[ 2835], 99.95th=[ 3064], 00:14:09.150 | 99.99th=[ 3425] 00:14:09.150 bw ( KiB/s): min=133120, max=144031, per=100.00%, avg=139736.78, 
stdev=3611.38, samples=9 00:14:09.150 iops : min=33280, max=36007, avg=34934.11, stdev=902.73, samples=9 00:14:09.150 lat (usec) : 1000=0.03% 00:14:09.150 lat (msec) : 2=87.10%, 4=12.86% 00:14:09.150 cpu : usr=46.65%, sys=49.05%, ctx=17, majf=0, minf=762 00:14:09.150 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:09.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:09.150 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:09.150 issued rwts: total=174080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:09.150 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:09.150 00:14:09.150 Run status group 0 (all jobs): 00:14:09.150 READ: bw=136MiB/s (143MB/s), 136MiB/s-136MiB/s (143MB/s-143MB/s), io=680MiB (713MB), run=5002-5002msec 00:14:09.150 ----------------------------------------------------- 00:14:09.150 Suppressions used: 00:14:09.150 count bytes template 00:14:09.150 1 11 /usr/src/fio/parse.c 00:14:09.150 1 8 libtcmalloc_minimal.so 00:14:09.150 1 904 libcrypto.so 00:14:09.150 ----------------------------------------------------- 00:14:09.150 00:14:09.150 12:46:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:09.150 12:46:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:09.150 12:46:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:09.150 12:46:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:09.150 12:46:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:09.150 12:46:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:09.150 12:46:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:09.150 12:46:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:09.150 12:46:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:09.150 12:46:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:09.151 12:46:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:09.151 12:46:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:09.151 12:46:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:09.151 12:46:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:09.151 12:46:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:09.151 12:46:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:09.151 12:46:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:09.151 12:46:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:14:09.151 12:46:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:09.151 12:46:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:09.151 12:46:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:09.151 { 00:14:09.151 "subsystems": [ 00:14:09.151 { 00:14:09.151 "subsystem": "bdev", 00:14:09.151 "config": [ 00:14:09.151 { 00:14:09.151 "params": { 00:14:09.151 "io_mechanism": "io_uring", 00:14:09.151 "conserve_cpu": true, 00:14:09.151 "filename": "/dev/nvme0n1", 00:14:09.151 "name": "xnvme_bdev" 00:14:09.151 }, 00:14:09.151 "method": "bdev_xnvme_create" 00:14:09.151 }, 00:14:09.151 { 00:14:09.151 "method": "bdev_wait_for_examine" 00:14:09.151 } 00:14:09.151 ] 00:14:09.151 } 00:14:09.151 ] 00:14:09.151 } 00:14:09.412 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:09.412 fio-3.35 00:14:09.412 Starting 1 thread 00:14:16.026 00:14:16.027 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70828: Wed Nov 20 12:46:40 2024 00:14:16.027 write: IOPS=34.9k, BW=136MiB/s (143MB/s)(682MiB/5001msec); 0 zone resets 00:14:16.027 slat (usec): min=2, max=105, avg= 4.04, stdev= 2.33 00:14:16.027 clat (usec): min=367, max=6115, avg=1666.69, stdev=277.82 00:14:16.027 lat (usec): min=374, max=6118, avg=1670.73, stdev=278.30 00:14:16.027 clat percentiles (usec): 00:14:16.027 | 1.00th=[ 1188], 5.00th=[ 1303], 10.00th=[ 1369], 20.00th=[ 1450], 00:14:16.027 | 30.00th=[ 1516], 40.00th=[ 1565], 50.00th=[ 1631], 60.00th=[ 1696], 00:14:16.027 | 70.00th=[ 1762], 80.00th=[ 1860], 90.00th=[ 2024], 95.00th=[ 2180], 00:14:16.027 | 99.00th=[ 2507], 99.50th=[ 2638], 99.90th=[ 3163], 99.95th=[ 3621], 00:14:16.027 | 99.99th=[ 4490] 00:14:16.027 bw ( KiB/s): min=132608, max=149464, per=100.00%, avg=139937.44, stdev=4758.56, samples=9 00:14:16.027 iops : min=33152, max=37366, avg=34984.22, stdev=1189.63, samples=9 00:14:16.027 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:14:16.027 lat (msec) : 2=89.07%, 4=10.87%, 10=0.05% 00:14:16.027 cpu : usr=43.50%, sys=51.78%, ctx=10, majf=0, minf=762 00:14:16.027 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:14:16.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.027 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:16.027 issued rwts: total=0,174633,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.027 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:16.027 00:14:16.027 Run status group 0 (all jobs): 00:14:16.027 WRITE: bw=136MiB/s (143MB/s), 136MiB/s-136MiB/s (143MB/s-143MB/s), io=682MiB (715MB), run=5001-5001msec 00:14:16.027 ----------------------------------------------------- 00:14:16.027 Suppressions used: 00:14:16.027 count bytes template 00:14:16.027 1 11 /usr/src/fio/parse.c 00:14:16.027 1 8 libtcmalloc_minimal.so 00:14:16.027 1 904 libcrypto.so 00:14:16.027 ----------------------------------------------------- 00:14:16.027 00:14:16.027 00:14:16.027 real 0m13.770s 00:14:16.027 user 0m7.407s 00:14:16.027 sys 0m5.586s 00:14:16.027 12:46:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:14:16.027 ************************************ 00:14:16.027 END TEST xnvme_fio_plugin 00:14:16.027 ************************************ 00:14:16.027 12:46:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:16.287 12:46:41 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:16.287 12:46:41 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:16.287 12:46:41 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:16.287 12:46:41 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:16.287 12:46:41 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:16.287 12:46:41 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:16.288 12:46:41 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:16.288 12:46:41 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:16.288 12:46:41 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:16.288 12:46:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:16.288 12:46:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:16.288 12:46:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:16.288 ************************************ 00:14:16.288 START TEST xnvme_rpc 00:14:16.288 ************************************ 00:14:16.288 12:46:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:16.288 12:46:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:16.288 12:46:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:16.288 12:46:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:16.288 12:46:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:16.288 12:46:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70914 00:14:16.288 12:46:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70914 00:14:16.288 12:46:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:16.288 12:46:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70914 ']' 00:14:16.288 12:46:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.288 12:46:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:16.288 12:46:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.288 12:46:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:16.288 12:46:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.288 [2024-11-20 12:46:41.669894] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
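The rpc_cmd helper seen below is a thin wrapper around SPDK's scripts/rpc.py, talking to the spdk_tgt just launched over the default /var/tmp/spdk.sock socket. A hand-run sketch of the same create/inspect/delete sequence (this pass creates the bdev without -c, i.e. conserve_cpu=false; the jq filter is the one the test itself applies):

./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # -> io_uring_cmd
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev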
00:14:16.288 [2024-11-20 12:46:41.670047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70914 ] 00:14:16.549 [2024-11-20 12:46:41.833037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.549 [2024-11-20 12:46:41.957082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.494 xnvme_bdev 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70914 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70914 ']' 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70914 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70914 00:14:17.494 killing process with pid 70914 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:17.494 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:17.495 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70914' 00:14:17.495 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70914 00:14:17.495 12:46:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70914 00:14:19.444 ************************************ 00:14:19.444 END TEST xnvme_rpc 00:14:19.444 ************************************ 00:14:19.444 00:14:19.444 real 0m2.980s 00:14:19.444 user 0m2.964s 00:14:19.444 sys 0m0.491s 00:14:19.444 12:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:19.444 12:46:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.444 12:46:44 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:19.444 12:46:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:19.444 12:46:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:19.444 12:46:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:19.444 ************************************ 00:14:19.444 START TEST xnvme_bdevperf 00:14:19.444 ************************************ 00:14:19.444 12:46:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:19.444 12:46:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:19.444 12:46:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:19.444 12:46:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:19.444 12:46:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:19.444 12:46:44 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:19.444 12:46:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:19.444 12:46:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:19.444 { 00:14:19.444 "subsystems": [ 00:14:19.444 { 00:14:19.444 "subsystem": "bdev", 00:14:19.444 "config": [ 00:14:19.444 { 00:14:19.444 "params": { 00:14:19.444 "io_mechanism": "io_uring_cmd", 00:14:19.444 "conserve_cpu": false, 00:14:19.444 "filename": "/dev/ng0n1", 00:14:19.444 "name": "xnvme_bdev" 00:14:19.444 }, 00:14:19.444 "method": "bdev_xnvme_create" 00:14:19.444 }, 00:14:19.444 { 00:14:19.444 "method": "bdev_wait_for_examine" 00:14:19.444 } 00:14:19.444 ] 00:14:19.444 } 00:14:19.444 ] 00:14:19.444 } 00:14:19.444 [2024-11-20 12:46:44.699946] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:14:19.444 [2024-11-20 12:46:44.700090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70989 ] 00:14:19.444 [2024-11-20 12:46:44.864515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.705 [2024-11-20 12:46:44.988380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.966 Running I/O for 5 seconds... 00:14:21.855 36862.00 IOPS, 143.99 MiB/s [2024-11-20T12:46:48.321Z] 40217.50 IOPS, 157.10 MiB/s [2024-11-20T12:46:49.708Z] 38815.33 IOPS, 151.62 MiB/s [2024-11-20T12:46:50.717Z] 38356.50 IOPS, 149.83 MiB/s [2024-11-20T12:46:50.717Z] 38952.60 IOPS, 152.16 MiB/s 00:14:25.198 Latency(us) 00:14:25.198 [2024-11-20T12:46:50.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.198 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:25.198 xnvme_bdev : 5.00 38936.75 152.10 0.00 0.00 1639.72 497.82 11796.48 00:14:25.198 [2024-11-20T12:46:50.718Z] =================================================================================================================== 00:14:25.199 [2024-11-20T12:46:50.718Z] Total : 38936.75 152.10 0.00 0.00 1639.72 497.82 11796.48 00:14:25.772 12:46:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:25.772 12:46:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:25.772 12:46:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:25.772 12:46:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:25.772 12:46:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:25.772 { 00:14:25.772 "subsystems": [ 00:14:25.772 { 00:14:25.772 "subsystem": "bdev", 00:14:25.772 "config": [ 00:14:25.772 { 00:14:25.772 "params": { 00:14:25.772 "io_mechanism": "io_uring_cmd", 00:14:25.772 "conserve_cpu": false, 00:14:25.772 "filename": "/dev/ng0n1", 00:14:25.772 "name": "xnvme_bdev" 00:14:25.772 }, 00:14:25.772 "method": "bdev_xnvme_create" 00:14:25.772 }, 00:14:25.772 { 00:14:25.772 "method": "bdev_wait_for_examine" 00:14:25.772 } 00:14:25.772 ] 00:14:25.772 } 00:14:25.772 ] 00:14:25.772 } 00:14:25.772 [2024-11-20 12:46:51.055595] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
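bdevperf consumes the same JSON subsystem config as the fio plugin, here fed through /dev/fd/62. A standalone sketch of the invocation started above, with the config saved to a file instead (flag glosses reflect how they are used in this run; bdevperf --help has the authoritative list):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/xnvme_bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
# -q queue depth, -w workload, -t run time in seconds,
# -o I/O size in bytes, -T run only against the named bdev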
00:14:25.772 [2024-11-20 12:46:51.055866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71059 ] 00:14:25.772 [2024-11-20 12:46:51.216935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.033 [2024-11-20 12:46:51.313024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.033 Running I/O for 5 seconds... 00:14:28.363 57664.00 IOPS, 225.25 MiB/s [2024-11-20T12:46:54.826Z] 54811.50 IOPS, 214.11 MiB/s [2024-11-20T12:46:55.771Z] 49130.33 IOPS, 191.92 MiB/s [2024-11-20T12:46:56.714Z] 45875.50 IOPS, 179.20 MiB/s [2024-11-20T12:46:56.714Z] 43868.00 IOPS, 171.36 MiB/s 00:14:31.195 Latency(us) 00:14:31.195 [2024-11-20T12:46:56.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.195 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:31.196 xnvme_bdev : 5.01 43818.71 171.17 0.00 0.00 1456.17 374.94 4461.49 00:14:31.196 [2024-11-20T12:46:56.715Z] =================================================================================================================== 00:14:31.196 [2024-11-20T12:46:56.715Z] Total : 43818.71 171.17 0.00 0.00 1456.17 374.94 4461.49 00:14:32.139 12:46:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:32.139 12:46:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:32.139 12:46:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:32.139 12:46:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:32.139 12:46:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:32.139 { 00:14:32.139 "subsystems": [ 00:14:32.139 { 00:14:32.139 "subsystem": "bdev", 00:14:32.139 "config": [ 00:14:32.139 { 00:14:32.139 "params": { 00:14:32.139 "io_mechanism": "io_uring_cmd", 00:14:32.139 "conserve_cpu": false, 00:14:32.139 "filename": "/dev/ng0n1", 00:14:32.139 "name": "xnvme_bdev" 00:14:32.139 }, 00:14:32.139 "method": "bdev_xnvme_create" 00:14:32.139 }, 00:14:32.139 { 00:14:32.139 "method": "bdev_wait_for_examine" 00:14:32.139 } 00:14:32.139 ] 00:14:32.139 } 00:14:32.139 ] 00:14:32.139 } 00:14:32.139 [2024-11-20 12:46:57.387409] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:14:32.139 [2024-11-20 12:46:57.388018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71139 ] 00:14:32.139 [2024-11-20 12:46:57.553828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.400 [2024-11-20 12:46:57.678273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.661 Running I/O for 5 seconds... 
00:14:34.556 76864.00 IOPS, 300.25 MiB/s [2024-11-20T12:47:01.018Z] 75968.00 IOPS, 296.75 MiB/s [2024-11-20T12:47:02.406Z] 76821.33 IOPS, 300.08 MiB/s [2024-11-20T12:47:02.981Z] 77584.00 IOPS, 303.06 MiB/s 00:14:37.462 Latency(us) 00:14:37.462 [2024-11-20T12:47:02.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.462 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:37.462 xnvme_bdev : 5.00 77405.34 302.36 0.00 0.00 823.45 532.48 2596.23 00:14:37.462 [2024-11-20T12:47:02.981Z] =================================================================================================================== 00:14:37.462 [2024-11-20T12:47:02.981Z] Total : 77405.34 302.36 0.00 0.00 823.45 532.48 2596.23 00:14:38.406 12:47:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:38.406 12:47:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:38.406 12:47:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:38.406 12:47:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:38.406 12:47:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:38.406 { 00:14:38.406 "subsystems": [ 00:14:38.406 { 00:14:38.406 "subsystem": "bdev", 00:14:38.406 "config": [ 00:14:38.406 { 00:14:38.406 "params": { 00:14:38.406 "io_mechanism": "io_uring_cmd", 00:14:38.406 "conserve_cpu": false, 00:14:38.406 "filename": "/dev/ng0n1", 00:14:38.406 "name": "xnvme_bdev" 00:14:38.406 }, 00:14:38.406 "method": "bdev_xnvme_create" 00:14:38.406 }, 00:14:38.406 { 00:14:38.406 "method": "bdev_wait_for_examine" 00:14:38.406 } 00:14:38.406 ] 00:14:38.406 } 00:14:38.406 ] 00:14:38.406 } 00:14:38.406 [2024-11-20 12:47:03.812100] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:14:38.407 [2024-11-20 12:47:03.812431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71213 ] 00:14:38.669 [2024-11-20 12:47:03.978082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.669 [2024-11-20 12:47:04.098546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.931 Running I/O for 5 seconds... 
00:14:41.256 37053.00 IOPS, 144.74 MiB/s [2024-11-20T12:47:07.716Z] 21546.50 IOPS, 84.17 MiB/s [2024-11-20T12:47:08.658Z] 14413.00 IOPS, 56.30 MiB/s [2024-11-20T12:47:09.600Z] 10849.50 IOPS, 42.38 MiB/s [2024-11-20T12:47:09.860Z] 8725.60 IOPS, 34.08 MiB/s 00:14:44.341 Latency(us) 00:14:44.341 [2024-11-20T12:47:09.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.341 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:44.341 xnvme_bdev : 5.26 8312.43 32.47 0.00 0.00 7513.04 89.40 664635.86 00:14:44.341 [2024-11-20T12:47:09.860Z] =================================================================================================================== 00:14:44.341 [2024-11-20T12:47:09.860Z] Total : 8312.43 32.47 0.00 0.00 7513.04 89.40 664635.86 00:14:44.911 ************************************ 00:14:44.911 END TEST xnvme_bdevperf 00:14:44.911 ************************************ 00:14:44.911 00:14:44.911 real 0m25.559s 00:14:44.911 user 0m13.996s 00:14:44.911 sys 0m11.057s 00:14:44.911 12:47:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:44.911 12:47:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:44.911 12:47:10 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:44.911 12:47:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:44.911 12:47:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:44.911 12:47:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:44.911 ************************************ 00:14:44.911 START TEST xnvme_fio_plugin 00:14:44.911 ************************************ 00:14:44.911 12:47:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:44.911 12:47:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:44.911 12:47:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:44.911 12:47:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:44.911 12:47:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:44.911 12:47:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:44.911 12:47:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:44.911 12:47:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:44.911 12:47:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:44.911 12:47:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:44.911 12:47:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:44.911 12:47:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:44.911 12:47:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local 
asan_lib= 00:14:44.911 12:47:10 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:44.911 12:47:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:44.911 12:47:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:44.911 12:47:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:44.911 12:47:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:44.911 12:47:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:44.911 12:47:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:44.911 12:47:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:44.911 12:47:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:44.911 12:47:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:44.911 12:47:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:44.911 { 00:14:44.911 "subsystems": [ 00:14:44.911 { 00:14:44.911 "subsystem": "bdev", 00:14:44.911 "config": [ 00:14:44.911 { 00:14:44.911 "params": { 00:14:44.911 "io_mechanism": "io_uring_cmd", 00:14:44.911 "conserve_cpu": false, 00:14:44.911 "filename": "/dev/ng0n1", 00:14:44.911 "name": "xnvme_bdev" 00:14:44.911 }, 00:14:44.911 "method": "bdev_xnvme_create" 00:14:44.911 }, 00:14:44.911 { 00:14:44.911 "method": "bdev_wait_for_examine" 00:14:44.911 } 00:14:44.911 ] 00:14:44.911 } 00:14:44.911 ] 00:14:44.911 } 00:14:45.172 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:45.172 fio-3.35 00:14:45.172 Starting 1 thread 00:14:51.760 00:14:51.760 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71326: Wed Nov 20 12:47:15 2024 00:14:51.760 read: IOPS=38.6k, BW=151MiB/s (158MB/s)(755MiB/5003msec) 00:14:51.760 slat (usec): min=2, max=138, avg= 3.84, stdev= 2.15 00:14:51.760 clat (usec): min=618, max=3661, avg=1498.57, stdev=304.97 00:14:51.760 lat (usec): min=621, max=3664, avg=1502.42, stdev=305.45 00:14:51.761 clat percentiles (usec): 00:14:51.761 | 1.00th=[ 914], 5.00th=[ 1037], 10.00th=[ 1123], 20.00th=[ 1237], 00:14:51.761 | 30.00th=[ 1336], 40.00th=[ 1401], 50.00th=[ 1483], 60.00th=[ 1549], 00:14:51.761 | 70.00th=[ 1631], 80.00th=[ 1729], 90.00th=[ 1893], 95.00th=[ 2040], 00:14:51.761 | 99.00th=[ 2376], 99.50th=[ 2507], 99.90th=[ 2704], 99.95th=[ 2802], 00:14:51.761 | 99.99th=[ 3130] 00:14:51.761 bw ( KiB/s): min=146432, max=177664, per=100.00%, avg=155944.56, stdev=11153.58, samples=9 00:14:51.761 iops : min=36608, max=44416, avg=38986.00, stdev=2788.42, samples=9 00:14:51.761 lat (usec) : 750=0.06%, 1000=3.26% 00:14:51.761 lat (msec) : 2=90.53%, 4=6.15% 00:14:51.761 cpu : usr=34.65%, sys=63.85%, ctx=15, majf=0, minf=762 00:14:51.761 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:51.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.761 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:14:51.761 issued rwts: total=193339,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:51.761 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:51.761 00:14:51.761 Run status group 0 (all jobs): 00:14:51.761 READ: bw=151MiB/s (158MB/s), 151MiB/s-151MiB/s (158MB/s-158MB/s), io=755MiB (792MB), run=5003-5003msec 00:14:51.761 ----------------------------------------------------- 00:14:51.761 Suppressions used: 00:14:51.761 count bytes template 00:14:51.761 1 11 /usr/src/fio/parse.c 00:14:51.761 1 8 libtcmalloc_minimal.so 00:14:51.761 1 904 libcrypto.so 00:14:51.761 ----------------------------------------------------- 00:14:51.761 00:14:51.761 12:47:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:51.761 12:47:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:51.761 12:47:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:51.761 12:47:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:51.761 12:47:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:51.761 12:47:17 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:51.761 12:47:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:51.761 12:47:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:51.761 12:47:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:51.761 12:47:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:51.761 12:47:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:51.761 12:47:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:51.761 12:47:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:51.761 12:47:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:51.761 12:47:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:51.761 12:47:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:51.761 12:47:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:51.761 12:47:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:51.761 12:47:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:51.761 12:47:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:51.761 12:47:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:51.761 { 00:14:51.761 "subsystems": [ 00:14:51.761 { 00:14:51.761 "subsystem": "bdev", 00:14:51.761 "config": [ 00:14:51.761 { 00:14:51.761 "params": { 00:14:51.761 "io_mechanism": "io_uring_cmd", 00:14:51.761 "conserve_cpu": false, 00:14:51.761 "filename": "/dev/ng0n1", 00:14:51.761 "name": "xnvme_bdev" 00:14:51.761 }, 00:14:51.761 "method": "bdev_xnvme_create" 00:14:51.761 }, 00:14:51.761 { 00:14:51.761 "method": "bdev_wait_for_examine" 00:14:51.761 } 00:14:51.761 ] 00:14:51.761 } 00:14:51.761 ] 00:14:51.761 } 00:14:51.761 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:51.761 fio-3.35 00:14:51.761 Starting 1 thread 00:14:58.358 00:14:58.358 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71421: Wed Nov 20 12:47:22 2024 00:14:58.358 write: IOPS=12.2k, BW=47.8MiB/s (50.1MB/s)(240MiB/5010msec); 0 zone resets 00:14:58.358 slat (nsec): min=2774, max=73509, avg=3986.37, stdev=2556.91 00:14:58.358 clat (usec): min=61, max=27655, avg=5192.39, stdev=6171.12 00:14:58.358 lat (usec): min=64, max=27659, avg=5196.37, stdev=6171.14 00:14:58.358 clat percentiles (usec): 00:14:58.358 | 1.00th=[ 151], 5.00th=[ 334], 10.00th=[ 416], 20.00th=[ 619], 00:14:58.358 | 30.00th=[ 717], 40.00th=[ 799], 50.00th=[ 922], 60.00th=[ 1336], 00:14:58.358 | 70.00th=[10683], 80.00th=[12649], 90.00th=[14353], 95.00th=[15795], 00:14:58.358 | 99.00th=[19006], 99.50th=[21365], 99.90th=[24511], 99.95th=[25560], 00:14:58.358 | 99.99th=[26870] 00:14:58.358 bw ( KiB/s): min=45496, max=55129, per=100.00%, avg=48996.90, stdev=2743.09, samples=10 00:14:58.358 iops : min=11374, max=13782, avg=12249.20, stdev=685.71, samples=10 00:14:58.358 lat (usec) : 100=0.10%, 250=2.87%, 500=11.57%, 750=19.75%, 1000=19.00% 00:14:58.358 lat (msec) : 2=10.22%, 4=0.66%, 10=3.92%, 20=31.19%, 50=0.72% 00:14:58.358 cpu : usr=33.40%, sys=65.74%, ctx=8, majf=0, minf=762 00:14:58.358 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.6%, 32=83.5%, >=64=15.8% 00:14:58.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:58.358 complete : 0=0.0%, 4=94.0%, 8=2.5%, 16=2.4%, 32=1.1%, 64=0.1%, >=64=0.0% 00:14:58.358 issued rwts: total=0,61320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:58.358 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:58.358 00:14:58.358 Run status group 0 (all jobs): 00:14:58.358 WRITE: bw=47.8MiB/s (50.1MB/s), 47.8MiB/s-47.8MiB/s (50.1MB/s-50.1MB/s), io=240MiB (251MB), run=5010-5010msec 00:14:58.619 ----------------------------------------------------- 00:14:58.619 Suppressions used: 00:14:58.619 count bytes template 00:14:58.619 1 11 /usr/src/fio/parse.c 00:14:58.619 1 8 libtcmalloc_minimal.so 00:14:58.619 1 904 libcrypto.so 00:14:58.619 ----------------------------------------------------- 00:14:58.619 00:14:58.619 00:14:58.619 real 0m13.650s 00:14:58.619 user 0m6.162s 00:14:58.619 sys 0m7.055s 00:14:58.619 ************************************ 00:14:58.619 END TEST xnvme_fio_plugin 00:14:58.619 ************************************ 00:14:58.619 12:47:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:58.619 12:47:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:58.619 12:47:23 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:58.619 12:47:23 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:58.619 12:47:23 
nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:58.619 12:47:23 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:58.619 12:47:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:58.619 12:47:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.619 12:47:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:58.619 ************************************ 00:14:58.619 START TEST xnvme_rpc 00:14:58.619 ************************************ 00:14:58.619 12:47:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:58.619 12:47:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:58.619 12:47:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:58.619 12:47:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:58.619 12:47:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:58.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.619 12:47:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71502 00:14:58.619 12:47:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71502 00:14:58.619 12:47:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71502 ']' 00:14:58.619 12:47:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.619 12:47:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:58.619 12:47:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.619 12:47:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.619 12:47:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.619 12:47:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.619 [2024-11-20 12:47:24.052897] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
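This second xnvme_rpc pass differs from the first only in passing -c to bdev_xnvme_create, so the bdev comes up with conserve_cpu=true. The verification performed below, as a hand-run sketch:

./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true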
00:14:58.619 [2024-11-20 12:47:24.053049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71502 ] 00:14:58.882 [2024-11-20 12:47:24.218572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.882 [2024-11-20 12:47:24.340077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.825 xnvme_bdev 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71502 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71502 ']' 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71502 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71502 00:14:59.825 killing process with pid 71502 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71502' 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71502 00:14:59.825 12:47:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71502 00:15:01.743 00:15:01.743 real 0m2.895s 00:15:01.743 user 0m2.915s 00:15:01.743 sys 0m0.463s 00:15:01.743 12:47:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.743 12:47:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:01.743 ************************************ 00:15:01.743 END TEST xnvme_rpc 00:15:01.743 ************************************ 00:15:01.743 12:47:26 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:01.743 12:47:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:01.743 12:47:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.743 12:47:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:01.743 ************************************ 00:15:01.743 START TEST xnvme_bdevperf 00:15:01.743 ************************************ 00:15:01.743 12:47:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:01.743 12:47:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:01.743 12:47:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:01.743 12:47:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:01.743 12:47:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:01.743 12:47:26 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:01.743 12:47:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:01.743 12:47:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:01.743 { 00:15:01.743 "subsystems": [ 00:15:01.743 { 00:15:01.743 "subsystem": "bdev", 00:15:01.743 "config": [ 00:15:01.743 { 00:15:01.743 "params": { 00:15:01.743 "io_mechanism": "io_uring_cmd", 00:15:01.743 "conserve_cpu": true, 00:15:01.743 "filename": "/dev/ng0n1", 00:15:01.743 "name": "xnvme_bdev" 00:15:01.743 }, 00:15:01.743 "method": "bdev_xnvme_create" 00:15:01.743 }, 00:15:01.743 { 00:15:01.743 "method": "bdev_wait_for_examine" 00:15:01.743 } 00:15:01.743 ] 00:15:01.743 } 00:15:01.743 ] 00:15:01.743 } 00:15:01.743 [2024-11-20 12:47:27.006238] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:15:01.743 [2024-11-20 12:47:27.006379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71576 ] 00:15:01.744 [2024-11-20 12:47:27.170762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.005 [2024-11-20 12:47:27.296238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.268 Running I/O for 5 seconds... 00:15:04.153 36160.00 IOPS, 141.25 MiB/s [2024-11-20T12:47:30.616Z] 34976.00 IOPS, 136.62 MiB/s [2024-11-20T12:47:32.004Z] 35029.33 IOPS, 136.83 MiB/s [2024-11-20T12:47:32.949Z] 35072.00 IOPS, 137.00 MiB/s 00:15:07.430 Latency(us) 00:15:07.430 [2024-11-20T12:47:32.949Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.430 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:07.430 xnvme_bdev : 5.00 35047.50 136.90 0.00 0.00 1821.87 951.53 4763.96 00:15:07.430 [2024-11-20T12:47:32.949Z] =================================================================================================================== 00:15:07.430 [2024-11-20T12:47:32.949Z] Total : 35047.50 136.90 0.00 0.00 1821.87 951.53 4763.96 00:15:08.001 12:47:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:08.001 12:47:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:08.001 12:47:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:08.001 12:47:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:08.001 12:47:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:08.001 { 00:15:08.001 "subsystems": [ 00:15:08.001 { 00:15:08.001 "subsystem": "bdev", 00:15:08.001 "config": [ 00:15:08.001 { 00:15:08.001 "params": { 00:15:08.001 "io_mechanism": "io_uring_cmd", 00:15:08.001 "conserve_cpu": true, 00:15:08.001 "filename": "/dev/ng0n1", 00:15:08.001 "name": "xnvme_bdev" 00:15:08.001 }, 00:15:08.001 "method": "bdev_xnvme_create" 00:15:08.001 }, 00:15:08.001 { 00:15:08.001 "method": "bdev_wait_for_examine" 00:15:08.001 } 00:15:08.001 ] 00:15:08.001 } 00:15:08.001 ] 00:15:08.001 } 00:15:08.001 [2024-11-20 12:47:33.425353] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
00:15:08.001 [2024-11-20 12:47:33.425687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71649 ] 00:15:08.259 [2024-11-20 12:47:33.590365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.259 [2024-11-20 12:47:33.684709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.517 Running I/O for 5 seconds... 00:15:10.840 62912.00 IOPS, 245.75 MiB/s [2024-11-20T12:47:36.944Z] 63136.00 IOPS, 246.62 MiB/s [2024-11-20T12:47:38.330Z] 60474.33 IOPS, 236.23 MiB/s [2024-11-20T12:47:39.265Z] 55026.00 IOPS, 214.95 MiB/s 00:15:13.746 Latency(us) 00:15:13.746 [2024-11-20T12:47:39.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.746 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:13.746 xnvme_bdev : 5.00 51857.89 202.57 0.00 0.00 1229.71 573.44 4612.73 00:15:13.746 [2024-11-20T12:47:39.265Z] =================================================================================================================== 00:15:13.746 [2024-11-20T12:47:39.265Z] Total : 51857.89 202.57 0.00 0.00 1229.71 573.44 4612.73 00:15:14.312 12:47:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:14.312 12:47:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:14.312 12:47:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:14.312 12:47:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:14.312 12:47:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:14.312 { 00:15:14.312 "subsystems": [ 00:15:14.312 { 00:15:14.312 "subsystem": "bdev", 00:15:14.312 "config": [ 00:15:14.312 { 00:15:14.312 "params": { 00:15:14.313 "io_mechanism": "io_uring_cmd", 00:15:14.313 "conserve_cpu": true, 00:15:14.313 "filename": "/dev/ng0n1", 00:15:14.313 "name": "xnvme_bdev" 00:15:14.313 }, 00:15:14.313 "method": "bdev_xnvme_create" 00:15:14.313 }, 00:15:14.313 { 00:15:14.313 "method": "bdev_wait_for_examine" 00:15:14.313 } 00:15:14.313 ] 00:15:14.313 } 00:15:14.313 ] 00:15:14.313 } 00:15:14.313 [2024-11-20 12:47:39.697798] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:15:14.313 [2024-11-20 12:47:39.697910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71719 ] 00:15:14.570 [2024-11-20 12:47:39.858124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.570 [2024-11-20 12:47:39.952191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.828 Running I/O for 5 seconds... 
00:15:16.694 84160.00 IOPS, 328.75 MiB/s [2024-11-20T12:47:43.585Z] 84576.00 IOPS, 330.38 MiB/s [2024-11-20T12:47:44.521Z] 84800.00 IOPS, 331.25 MiB/s [2024-11-20T12:47:45.467Z] 85040.00 IOPS, 332.19 MiB/s 00:15:19.948 Latency(us) 00:15:19.948 [2024-11-20T12:47:45.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.948 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:19.948 xnvme_bdev : 5.00 85058.44 332.26 0.00 0.00 749.05 333.98 2772.68 00:15:19.948 [2024-11-20T12:47:45.467Z] =================================================================================================================== 00:15:19.948 [2024-11-20T12:47:45.467Z] Total : 85058.44 332.26 0.00 0.00 749.05 333.98 2772.68 00:15:20.522 12:47:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:20.522 12:47:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:20.522 12:47:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:20.522 12:47:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:20.522 12:47:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:20.522 { 00:15:20.522 "subsystems": [ 00:15:20.522 { 00:15:20.522 "subsystem": "bdev", 00:15:20.522 "config": [ 00:15:20.522 { 00:15:20.522 "params": { 00:15:20.522 "io_mechanism": "io_uring_cmd", 00:15:20.522 "conserve_cpu": true, 00:15:20.522 "filename": "/dev/ng0n1", 00:15:20.522 "name": "xnvme_bdev" 00:15:20.522 }, 00:15:20.522 "method": "bdev_xnvme_create" 00:15:20.522 }, 00:15:20.522 { 00:15:20.522 "method": "bdev_wait_for_examine" 00:15:20.522 } 00:15:20.522 ] 00:15:20.522 } 00:15:20.522 ] 00:15:20.522 } 00:15:20.784 [2024-11-20 12:47:46.062479] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:15:20.784 [2024-11-20 12:47:46.062626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71793 ] 00:15:20.784 [2024-11-20 12:47:46.229042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.045 [2024-11-20 12:47:46.350287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.306 Running I/O for 5 seconds... 
00:15:23.186 37784.00 IOPS, 147.59 MiB/s [2024-11-20T12:47:49.650Z] 37736.50 IOPS, 147.41 MiB/s [2024-11-20T12:47:51.038Z] 35816.00 IOPS, 139.91 MiB/s [2024-11-20T12:47:51.980Z] 34135.25 IOPS, 133.34 MiB/s [2024-11-20T12:47:51.980Z] 34690.80 IOPS, 135.51 MiB/s 00:15:26.462 Latency(us) 00:15:26.462 [2024-11-20T12:47:51.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.462 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:26.462 xnvme_bdev : 5.01 34647.00 135.34 0.00 0.00 1841.22 49.62 53235.40 00:15:26.462 [2024-11-20T12:47:51.981Z] =================================================================================================================== 00:15:26.462 [2024-11-20T12:47:51.981Z] Total : 34647.00 135.34 0.00 0.00 1841.22 49.62 53235.40 00:15:27.036 00:15:27.036 real 0m25.473s 00:15:27.036 user 0m16.279s 00:15:27.036 sys 0m7.309s 00:15:27.036 12:47:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.036 12:47:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:27.036 ************************************ 00:15:27.036 END TEST xnvme_bdevperf 00:15:27.036 ************************************ 00:15:27.036 12:47:52 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:27.036 12:47:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:27.036 12:47:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.036 12:47:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:27.036 ************************************ 00:15:27.036 START TEST xnvme_fio_plugin 00:15:27.036 ************************************ 00:15:27.036 12:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:27.036 12:47:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:27.036 12:47:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:27.036 12:47:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:27.036 12:47:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:27.036 12:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:27.036 12:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:27.036 12:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:27.036 12:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:27.036 12:47:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:27.036 12:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:27.036 12:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:27.036 12:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:15:27.036 12:47:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:27.036 12:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:27.036 12:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:27.036 12:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:27.036 12:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:27.036 12:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:27.036 12:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:27.036 12:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:27.036 12:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:27.036 12:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:27.036 12:47:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:27.036 { 00:15:27.036 "subsystems": [ 00:15:27.036 { 00:15:27.036 "subsystem": "bdev", 00:15:27.036 "config": [ 00:15:27.036 { 00:15:27.036 "params": { 00:15:27.036 "io_mechanism": "io_uring_cmd", 00:15:27.036 "conserve_cpu": true, 00:15:27.036 "filename": "/dev/ng0n1", 00:15:27.036 "name": "xnvme_bdev" 00:15:27.036 }, 00:15:27.036 "method": "bdev_xnvme_create" 00:15:27.036 }, 00:15:27.036 { 00:15:27.036 "method": "bdev_wait_for_examine" 00:15:27.036 } 00:15:27.036 ] 00:15:27.036 } 00:15:27.036 ] 00:15:27.036 } 00:15:27.298 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:27.298 fio-3.35 00:15:27.298 Starting 1 thread 00:15:33.898 00:15:33.898 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71910: Wed Nov 20 12:47:58 2024 00:15:33.898 read: IOPS=35.6k, BW=139MiB/s (146MB/s)(696MiB/5001msec) 00:15:33.898 slat (nsec): min=2727, max=77991, avg=3579.13, stdev=2029.83 00:15:33.898 clat (usec): min=874, max=3039, avg=1650.87, stdev=288.83 00:15:33.898 lat (usec): min=877, max=3063, avg=1654.45, stdev=289.19 00:15:33.898 clat percentiles (usec): 00:15:33.898 | 1.00th=[ 1123], 5.00th=[ 1237], 10.00th=[ 1319], 20.00th=[ 1418], 00:15:33.898 | 30.00th=[ 1483], 40.00th=[ 1549], 50.00th=[ 1614], 60.00th=[ 1680], 00:15:33.898 | 70.00th=[ 1778], 80.00th=[ 1876], 90.00th=[ 2040], 95.00th=[ 2180], 00:15:33.898 | 99.00th=[ 2474], 99.50th=[ 2573], 99.90th=[ 2802], 99.95th=[ 2868], 00:15:33.898 | 99.99th=[ 2966] 00:15:33.898 bw ( KiB/s): min=138240, max=145408, per=99.92%, avg=142298.89, stdev=2616.89, samples=9 00:15:33.898 iops : min=34560, max=36352, avg=35574.67, stdev=654.25, samples=9 00:15:33.898 lat (usec) : 1000=0.14% 00:15:33.898 lat (msec) : 2=87.97%, 4=11.89% 00:15:33.898 cpu : usr=55.28%, sys=41.42%, ctx=7, majf=0, minf=762 00:15:33.898 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:33.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:33.898 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:15:33.898 issued rwts: total=178048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:33.898 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:33.898 00:15:33.898 Run status group 0 (all jobs): 00:15:33.898 READ: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=696MiB (729MB), run=5001-5001msec 00:15:33.898 ----------------------------------------------------- 00:15:33.898 Suppressions used: 00:15:33.898 count bytes template 00:15:33.898 1 11 /usr/src/fio/parse.c 00:15:33.899 1 8 libtcmalloc_minimal.so 00:15:33.899 1 904 libcrypto.so 00:15:33.899 ----------------------------------------------------- 00:15:33.899 00:15:33.899 12:47:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:33.899 12:47:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:33.899 12:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:33.899 12:47:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:33.899 12:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:33.899 12:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:33.899 12:47:59 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:33.899 12:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:33.899 12:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:33.899 12:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:33.899 12:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:33.899 12:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:33.899 12:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:33.899 12:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:33.899 12:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:33.899 12:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:33.899 12:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:33.899 12:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:33.899 12:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:33.899 12:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:33.899 12:47:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:33.899 { 00:15:33.899 "subsystems": [ 00:15:33.899 { 00:15:33.899 "subsystem": "bdev", 00:15:33.899 "config": [ 00:15:33.899 { 00:15:33.899 "params": { 00:15:33.899 "io_mechanism": "io_uring_cmd", 00:15:33.899 "conserve_cpu": true, 00:15:33.899 "filename": "/dev/ng0n1", 00:15:33.899 "name": "xnvme_bdev" 00:15:33.899 }, 00:15:33.899 "method": "bdev_xnvme_create" 00:15:33.899 }, 00:15:33.899 { 00:15:33.899 "method": "bdev_wait_for_examine" 00:15:33.899 } 00:15:33.899 ] 00:15:33.899 } 00:15:33.899 ] 00:15:33.899 } 00:15:34.160 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:34.160 fio-3.35 00:15:34.160 Starting 1 thread 00:15:40.807 00:15:40.807 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72003: Wed Nov 20 12:48:05 2024 00:15:40.807 write: IOPS=36.8k, BW=144MiB/s (151MB/s)(719MiB/5001msec); 0 zone resets 00:15:40.807 slat (usec): min=2, max=307, avg= 3.98, stdev= 2.49 00:15:40.807 clat (usec): min=367, max=5361, avg=1576.44, stdev=298.74 00:15:40.807 lat (usec): min=372, max=5365, avg=1580.42, stdev=299.38 00:15:40.807 clat percentiles (usec): 00:15:40.807 | 1.00th=[ 1057], 5.00th=[ 1156], 10.00th=[ 1237], 20.00th=[ 1319], 00:15:40.807 | 30.00th=[ 1401], 40.00th=[ 1483], 50.00th=[ 1549], 60.00th=[ 1614], 00:15:40.807 | 70.00th=[ 1696], 80.00th=[ 1795], 90.00th=[ 1958], 95.00th=[ 2114], 00:15:40.807 | 99.00th=[ 2442], 99.50th=[ 2606], 99.90th=[ 3359], 99.95th=[ 3621], 00:15:40.807 | 99.99th=[ 4490] 00:15:40.807 bw ( KiB/s): min=138091, max=173576, per=100.00%, avg=148489.22, stdev=10446.63, samples=9 00:15:40.807 iops : min=34522, max=43394, avg=37122.22, stdev=2611.75, samples=9 00:15:40.807 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.19% 00:15:40.807 lat (msec) : 2=91.64%, 4=8.16%, 10=0.01% 00:15:40.807 cpu : usr=51.48%, sys=44.58%, ctx=10, majf=0, minf=762 00:15:40.807 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.4%, 16=25.0%, 32=50.2%, >=64=1.6% 00:15:40.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.807 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:40.807 issued rwts: total=0,184127,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.807 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:40.807 00:15:40.807 Run status group 0 (all jobs): 00:15:40.807 WRITE: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=719MiB (754MB), run=5001-5001msec 00:15:40.807 ----------------------------------------------------- 00:15:40.807 Suppressions used: 00:15:40.807 count bytes template 00:15:40.807 1 11 /usr/src/fio/parse.c 00:15:40.807 1 8 libtcmalloc_minimal.so 00:15:40.807 1 904 libcrypto.so 00:15:40.807 ----------------------------------------------------- 00:15:40.807 00:15:40.807 ************************************ 00:15:40.807 END TEST xnvme_fio_plugin 00:15:40.807 ************************************ 00:15:40.807 00:15:40.807 real 0m13.765s 00:15:40.807 user 0m8.147s 00:15:40.807 sys 0m4.917s 00:15:40.807 12:48:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.807 12:48:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:40.807 Process with pid 71502 is not found 00:15:40.807 12:48:06 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71502 00:15:40.807 12:48:06 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71502 ']' 00:15:40.807 12:48:06 nvme_xnvme -- common/autotest_common.sh@958 -- # 
kill -0 71502 00:15:40.807 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71502) - No such process 00:15:40.807 12:48:06 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71502 is not found' 00:15:40.807 00:15:40.807 real 3m30.634s 00:15:40.807 user 1m53.735s 00:15:40.807 sys 1m22.467s 00:15:40.807 12:48:06 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.807 ************************************ 00:15:40.807 12:48:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:40.807 END TEST nvme_xnvme 00:15:40.807 ************************************ 00:15:41.069 12:48:06 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:41.069 12:48:06 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:41.069 12:48:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.069 12:48:06 -- common/autotest_common.sh@10 -- # set +x 00:15:41.069 ************************************ 00:15:41.069 START TEST blockdev_xnvme 00:15:41.070 ************************************ 00:15:41.070 12:48:06 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:41.070 * Looking for test storage... 00:15:41.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:41.070 12:48:06 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:41.070 12:48:06 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:15:41.070 12:48:06 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:41.070 12:48:06 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:41.070 12:48:06 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:41.070 12:48:06 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.070 12:48:06 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:41.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.070 --rc genhtml_branch_coverage=1 00:15:41.070 --rc genhtml_function_coverage=1 00:15:41.070 --rc genhtml_legend=1 00:15:41.070 --rc geninfo_all_blocks=1 00:15:41.070 --rc geninfo_unexecuted_blocks=1 00:15:41.070 00:15:41.070 ' 00:15:41.070 12:48:06 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:41.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.070 --rc genhtml_branch_coverage=1 00:15:41.070 --rc genhtml_function_coverage=1 00:15:41.070 --rc genhtml_legend=1 00:15:41.070 --rc geninfo_all_blocks=1 00:15:41.070 --rc geninfo_unexecuted_blocks=1 00:15:41.070 00:15:41.070 ' 00:15:41.070 12:48:06 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:41.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.070 --rc genhtml_branch_coverage=1 00:15:41.070 --rc genhtml_function_coverage=1 00:15:41.070 --rc genhtml_legend=1 00:15:41.070 --rc geninfo_all_blocks=1 00:15:41.070 --rc geninfo_unexecuted_blocks=1 00:15:41.070 00:15:41.070 ' 00:15:41.070 12:48:06 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:41.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.070 --rc genhtml_branch_coverage=1 00:15:41.070 --rc genhtml_function_coverage=1 00:15:41.070 --rc genhtml_legend=1 00:15:41.070 --rc geninfo_all_blocks=1 00:15:41.070 --rc geninfo_unexecuted_blocks=1 00:15:41.070 00:15:41.070 ' 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72132 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 72132 00:15:41.070 12:48:06 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 72132 ']' 00:15:41.070 12:48:06 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.070 12:48:06 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.070 12:48:06 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.070 12:48:06 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:41.070 12:48:06 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.070 12:48:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:41.332 [2024-11-20 12:48:06.607043] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
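The trace above launches the target via start_spdk_tgt (pid 72132) and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that sequence, assuming the /var/tmp/spdk.sock default shown in the trace; the poll loop below is a stand-in for the real waitforlisten helper in autotest_common.sh, which also does retry bookkeeping:

    # Sketch only, not the actual helper: start the target and poll its
    # UNIX-domain RPC socket until it responds to a trivial RPC.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    rpc_addr=/var/tmp/spdk.sock
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; do
        # Bail out if the target died instead of coming up.
        kill -0 "$spdk_tgt_pid" || { echo "spdk_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done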
00:15:41.332 [2024-11-20 12:48:06.607448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72132 ] 00:15:41.332 [2024-11-20 12:48:06.771493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.594 [2024-11-20 12:48:06.897493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.168 12:48:07 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.168 12:48:07 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:42.168 12:48:07 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:15:42.168 12:48:07 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:15:42.168 12:48:07 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:42.168 12:48:07 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:42.168 12:48:07 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:42.741 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:43.314 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:43.314 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:43.314 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:43.314 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:43.314 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:43.314 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:43.314 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:43.314 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:15:43.314 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:43.314 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:15:43.314 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:43.314 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:43.314 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:43.314 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:43.314 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:15:43.314 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:43.315 nvme0n1 00:15:43.315 nvme0n2 00:15:43.315 nvme0n3 00:15:43.315 nvme1n1 00:15:43.315 nvme2n1 00:15:43.315 nvme3n1 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.315 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq 
-r '.[] | select(.claimed == false)' 00:15:43.315 12:48:08 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.578 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:15:43.578 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "97f66f30-cd64-4dd8-b681-6eda3170a677"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "97f66f30-cd64-4dd8-b681-6eda3170a677",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "e76972a3-1529-4e31-83be-955c438c3d5e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e76972a3-1529-4e31-83be-955c438c3d5e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "b8b5cbda-47a7-4164-b137-6a855e536e71"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b8b5cbda-47a7-4164-b137-6a855e536e71",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "16a0d49c-816f-47c1-a9f4-ab4f78b4f9fb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "16a0d49c-816f-47c1-a9f4-ab4f78b4f9fb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "e263d508-f00f-44fd-b0f8-f183f2258461"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e263d508-f00f-44fd-b0f8-f183f2258461",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "6e49e19e-1716-4995-9ffd-6f05a1e1cf7e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6e49e19e-1716-4995-9ffd-6f05a1e1cf7e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:43.578 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:15:43.578 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:15:43.578 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:15:43.578 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:15:43.578 12:48:08 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 72132 00:15:43.578 12:48:08 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72132 ']' 00:15:43.578 12:48:08 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 72132 00:15:43.578 12:48:08 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:43.578 12:48:08 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:43.578 12:48:08 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72132 00:15:43.578 killing process with pid 72132 00:15:43.578 12:48:08 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:43.578 12:48:08 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:43.578 12:48:08 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72132' 00:15:43.578 12:48:08 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 72132 00:15:43.578 
12:48:08 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 72132 00:15:45.506 12:48:10 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:45.506 12:48:10 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:45.506 12:48:10 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:45.506 12:48:10 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:45.506 12:48:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:45.506 ************************************ 00:15:45.506 START TEST bdev_hello_world 00:15:45.506 ************************************ 00:15:45.506 12:48:10 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:45.506 [2024-11-20 12:48:10.649994] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:15:45.506 [2024-11-20 12:48:10.650564] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72416 ] 00:15:45.506 [2024-11-20 12:48:10.814219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.506 [2024-11-20 12:48:10.938145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.078 [2024-11-20 12:48:11.338809] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:46.078 [2024-11-20 12:48:11.338872] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:46.078 [2024-11-20 12:48:11.338890] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:46.078 [2024-11-20 12:48:11.341053] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:46.078 [2024-11-20 12:48:11.343534] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:46.078 [2024-11-20 12:48:11.343584] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:46.078 [2024-11-20 12:48:11.344086] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
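The bdev_hello_world test above drives the hello_bdev example against the first xnvme bdev; the standalone equivalent of the traced command line is:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b nvme0n1

On success the example writes "Hello World!" through the bdev layer and reads it back, which is the string echoed in the read_complete NOTICE line above.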
00:15:46.078 00:15:46.078 [2024-11-20 12:48:11.344122] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:46.651 00:15:46.651 real 0m1.535s 00:15:46.651 user 0m1.164s 00:15:46.651 sys 0m0.223s 00:15:46.651 ************************************ 00:15:46.651 END TEST bdev_hello_world 00:15:46.651 ************************************ 00:15:46.651 12:48:12 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:46.651 12:48:12 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:46.913 12:48:12 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:15:46.913 12:48:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:46.913 12:48:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:46.913 12:48:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:46.913 ************************************ 00:15:46.913 START TEST bdev_bounds 00:15:46.914 ************************************ 00:15:46.914 12:48:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:46.914 Process bdevio pid: 72458 00:15:46.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.914 12:48:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72458 00:15:46.914 12:48:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:46.914 12:48:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72458' 00:15:46.914 12:48:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72458 00:15:46.914 12:48:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:46.914 12:48:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72458 ']' 00:15:46.914 12:48:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.914 12:48:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:46.914 12:48:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.914 12:48:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:46.914 12:48:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:46.914 [2024-11-20 12:48:12.254052] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
00:15:46.914 [2024-11-20 12:48:12.254199] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72458 ] 00:15:46.914 [2024-11-20 12:48:12.419256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:47.175 [2024-11-20 12:48:12.543535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.175 [2024-11-20 12:48:12.543856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:47.175 [2024-11-20 12:48:12.543904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.749 12:48:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:47.749 12:48:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:47.749 12:48:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:47.749 I/O targets: 00:15:47.749 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:47.749 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:47.749 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:47.749 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:47.749 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:47.749 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:47.749 00:15:47.749 00:15:47.749 CUnit - A unit testing framework for C - Version 2.1-3 00:15:47.749 http://cunit.sourceforge.net/ 00:15:47.749 00:15:47.749 00:15:47.749 Suite: bdevio tests on: nvme3n1 00:15:47.749 Test: blockdev write read block ...passed 00:15:47.749 Test: blockdev write zeroes read block ...passed 00:15:47.749 Test: blockdev write zeroes read no split ...passed 00:15:47.749 Test: blockdev write zeroes read split ...passed 00:15:47.749 Test: blockdev write zeroes read split partial ...passed 00:15:47.749 Test: blockdev reset ...passed 00:15:47.749 Test: blockdev write read 8 blocks ...passed 00:15:47.749 Test: blockdev write read size > 128k ...passed 00:15:47.749 Test: blockdev write read invalid size ...passed 00:15:48.011 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:48.011 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:48.011 Test: blockdev write read max offset ...passed 00:15:48.011 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:48.011 Test: blockdev writev readv 8 blocks ...passed 00:15:48.011 Test: blockdev writev readv 30 x 1block ...passed 00:15:48.011 Test: blockdev writev readv block ...passed 00:15:48.011 Test: blockdev writev readv size > 128k ...passed 00:15:48.011 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:48.011 Test: blockdev comparev and writev ...passed 00:15:48.011 Test: blockdev nvme passthru rw ...passed 00:15:48.011 Test: blockdev nvme passthru vendor specific ...passed 00:15:48.011 Test: blockdev nvme admin passthru ...passed 00:15:48.011 Test: blockdev copy ...passed 00:15:48.011 Suite: bdevio tests on: nvme2n1 00:15:48.011 Test: blockdev write read block ...passed 00:15:48.011 Test: blockdev write zeroes read block ...passed 00:15:48.012 Test: blockdev write zeroes read no split ...passed 00:15:48.012 Test: blockdev write zeroes read split ...passed 00:15:48.012 Test: blockdev write zeroes read split partial ...passed 00:15:48.012 Test: blockdev reset ...passed 
00:15:48.012 Test: blockdev write read 8 blocks ...passed 00:15:48.012 Test: blockdev write read size > 128k ...passed 00:15:48.012 Test: blockdev write read invalid size ...passed 00:15:48.012 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:48.012 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:48.012 Test: blockdev write read max offset ...passed 00:15:48.012 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:48.012 Test: blockdev writev readv 8 blocks ...passed 00:15:48.012 Test: blockdev writev readv 30 x 1block ...passed 00:15:48.012 Test: blockdev writev readv block ...passed 00:15:48.012 Test: blockdev writev readv size > 128k ...passed 00:15:48.012 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:48.012 Test: blockdev comparev and writev ...passed 00:15:48.012 Test: blockdev nvme passthru rw ...passed 00:15:48.012 Test: blockdev nvme passthru vendor specific ...passed 00:15:48.012 Test: blockdev nvme admin passthru ...passed 00:15:48.012 Test: blockdev copy ...passed 00:15:48.012 Suite: bdevio tests on: nvme1n1 00:15:48.012 Test: blockdev write read block ...passed 00:15:48.012 Test: blockdev write zeroes read block ...passed 00:15:48.012 Test: blockdev write zeroes read no split ...passed 00:15:48.012 Test: blockdev write zeroes read split ...passed 00:15:48.012 Test: blockdev write zeroes read split partial ...passed 00:15:48.012 Test: blockdev reset ...passed 00:15:48.012 Test: blockdev write read 8 blocks ...passed 00:15:48.012 Test: blockdev write read size > 128k ...passed 00:15:48.012 Test: blockdev write read invalid size ...passed 00:15:48.012 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:48.012 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:48.012 Test: blockdev write read max offset ...passed 00:15:48.012 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:48.012 Test: blockdev writev readv 8 blocks ...passed 00:15:48.012 Test: blockdev writev readv 30 x 1block ...passed 00:15:48.012 Test: blockdev writev readv block ...passed 00:15:48.012 Test: blockdev writev readv size > 128k ...passed 00:15:48.012 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:48.012 Test: blockdev comparev and writev ...passed 00:15:48.012 Test: blockdev nvme passthru rw ...passed 00:15:48.012 Test: blockdev nvme passthru vendor specific ...passed 00:15:48.012 Test: blockdev nvme admin passthru ...passed 00:15:48.012 Test: blockdev copy ...passed 00:15:48.012 Suite: bdevio tests on: nvme0n3 00:15:48.012 Test: blockdev write read block ...passed 00:15:48.012 Test: blockdev write zeroes read block ...passed 00:15:48.012 Test: blockdev write zeroes read no split ...passed 00:15:48.012 Test: blockdev write zeroes read split ...passed 00:15:48.012 Test: blockdev write zeroes read split partial ...passed 00:15:48.012 Test: blockdev reset ...passed 00:15:48.012 Test: blockdev write read 8 blocks ...passed 00:15:48.012 Test: blockdev write read size > 128k ...passed 00:15:48.012 Test: blockdev write read invalid size ...passed 00:15:48.012 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:48.012 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:48.012 Test: blockdev write read max offset ...passed 00:15:48.012 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:48.012 Test: blockdev writev readv 8 blocks 
...passed 00:15:48.012 Test: blockdev writev readv 30 x 1block ...passed 00:15:48.012 Test: blockdev writev readv block ...passed 00:15:48.012 Test: blockdev writev readv size > 128k ...passed 00:15:48.012 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:48.274 Test: blockdev comparev and writev ...passed 00:15:48.274 Test: blockdev nvme passthru rw ...passed 00:15:48.274 Test: blockdev nvme passthru vendor specific ...passed 00:15:48.274 Test: blockdev nvme admin passthru ...passed 00:15:48.274 Test: blockdev copy ...passed 00:15:48.274 Suite: bdevio tests on: nvme0n2 00:15:48.274 Test: blockdev write read block ...passed 00:15:48.274 Test: blockdev write zeroes read block ...passed 00:15:48.274 Test: blockdev write zeroes read no split ...passed 00:15:48.274 Test: blockdev write zeroes read split ...passed 00:15:48.274 Test: blockdev write zeroes read split partial ...passed 00:15:48.274 Test: blockdev reset ...passed 00:15:48.274 Test: blockdev write read 8 blocks ...passed 00:15:48.274 Test: blockdev write read size > 128k ...passed 00:15:48.274 Test: blockdev write read invalid size ...passed 00:15:48.274 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:48.274 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:48.274 Test: blockdev write read max offset ...passed 00:15:48.274 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:48.274 Test: blockdev writev readv 8 blocks ...passed 00:15:48.274 Test: blockdev writev readv 30 x 1block ...passed 00:15:48.274 Test: blockdev writev readv block ...passed 00:15:48.274 Test: blockdev writev readv size > 128k ...passed 00:15:48.274 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:48.274 Test: blockdev comparev and writev ...passed 00:15:48.274 Test: blockdev nvme passthru rw ...passed 00:15:48.274 Test: blockdev nvme passthru vendor specific ...passed 00:15:48.274 Test: blockdev nvme admin passthru ...passed 00:15:48.274 Test: blockdev copy ...passed 00:15:48.274 Suite: bdevio tests on: nvme0n1 00:15:48.274 Test: blockdev write read block ...passed 00:15:48.274 Test: blockdev write zeroes read block ...passed 00:15:48.274 Test: blockdev write zeroes read no split ...passed 00:15:48.274 Test: blockdev write zeroes read split ...passed 00:15:48.274 Test: blockdev write zeroes read split partial ...passed 00:15:48.274 Test: blockdev reset ...passed 00:15:48.274 Test: blockdev write read 8 blocks ...passed 00:15:48.274 Test: blockdev write read size > 128k ...passed 00:15:48.274 Test: blockdev write read invalid size ...passed 00:15:48.274 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:48.274 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:48.274 Test: blockdev write read max offset ...passed 00:15:48.274 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:48.274 Test: blockdev writev readv 8 blocks ...passed 00:15:48.274 Test: blockdev writev readv 30 x 1block ...passed 00:15:48.274 Test: blockdev writev readv block ...passed 00:15:48.274 Test: blockdev writev readv size > 128k ...passed 00:15:48.274 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:48.274 Test: blockdev comparev and writev ...passed 00:15:48.274 Test: blockdev nvme passthru rw ...passed 00:15:48.274 Test: blockdev nvme passthru vendor specific ...passed 00:15:48.274 Test: blockdev nvme admin passthru ...passed 00:15:48.274 Test: blockdev copy ...passed 
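Each of the six bdevio suites above (one per xnvme bdev) runs the same 23 tests, which is where the run summary that follows gets its totals: 6 suites x 23 tests = 138 tests, all passed, with 780 asserts across the run.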
00:15:48.274 00:15:48.274 Run Summary: Type Total Ran Passed Failed Inactive 00:15:48.274 suites 6 6 n/a 0 0 00:15:48.274 tests 138 138 138 0 0 00:15:48.274 asserts 780 780 780 0 n/a 00:15:48.274 00:15:48.274 Elapsed time = 1.222 seconds 00:15:48.274 0 00:15:48.274 12:48:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72458 00:15:48.274 12:48:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72458 ']' 00:15:48.274 12:48:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72458 00:15:48.274 12:48:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:48.274 12:48:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:48.274 12:48:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72458 00:15:48.274 12:48:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:48.274 12:48:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:48.274 12:48:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72458' 00:15:48.274 killing process with pid 72458 00:15:48.274 12:48:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72458 00:15:48.274 12:48:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72458 00:15:49.219 12:48:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:49.219 00:15:49.219 real 0m2.319s 00:15:49.219 user 0m5.618s 00:15:49.219 sys 0m0.359s 00:15:49.219 12:48:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.219 12:48:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:49.219 ************************************ 00:15:49.219 END TEST bdev_bounds 00:15:49.219 ************************************ 00:15:49.219 12:48:14 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:49.219 12:48:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:49.219 12:48:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.219 12:48:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:49.219 ************************************ 00:15:49.219 START TEST bdev_nbd 00:15:49.219 ************************************ 00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72513 00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72513 /var/tmp/spdk-nbd.sock 00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72513 ']' 00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:49.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:49.219 12:48:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:49.219 [2024-11-20 12:48:14.640849] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
00:15:49.219 [2024-11-20 12:48:14.640979] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.480 [2024-11-20 12:48:14.803055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.480 [2024-11-20 12:48:14.878014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.053 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:50.053 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:50.053 12:48:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:50.053 12:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:50.053 12:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:50.053 12:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:50.053 12:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:50.053 12:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:50.053 12:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:50.053 12:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:50.053 12:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:50.053 12:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:50.053 12:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:50.053 12:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:50.053 12:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:50.314 12:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:50.314 12:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:50.314 12:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:50.314 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:50.314 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:50.314 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:50.314 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:50.314 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:50.314 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:50.314 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:50.314 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:50.314 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:50.314 
1+0 records in 00:15:50.314 1+0 records out 00:15:50.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487479 s, 8.4 MB/s 00:15:50.314 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.314 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:50.314 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.314 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:50.314 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:50.314 12:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:50.314 12:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:50.314 12:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:50.576 12:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:50.576 12:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:50.576 12:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:50.576 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:50.576 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:50.576 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:50.576 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:50.576 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:50.576 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:50.576 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:50.576 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:50.576 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:50.576 1+0 records in 00:15:50.576 1+0 records out 00:15:50.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467637 s, 8.8 MB/s 00:15:50.576 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.576 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:50.576 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.576 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:50.576 12:48:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:50.576 12:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:50.576 12:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:50.576 12:48:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:50.838 12:48:16 
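The waitfornbd helper these records keep re-entering is a two-stage readiness check: poll /proc/partitions until the kernel lists the device, then prove it services I/O with one 4 KiB O_DIRECT read. A condensed sketch of that pattern — iteration budget and file paths as in the trace, the retry delay is an assumption (the real helper lives in common/autotest_common.sh):

waitfornbd() {
    local nbd_name=$1 i size
    local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    # Stage 1: wait for the device to appear in the kernel's partition table.
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed delay; the 20-try budget matches the trace
    done
    (( i <= 20 )) || return 1
    # Stage 2: a direct read must transfer real bytes before the device counts as live.
    dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "$size" != 0 ]
}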
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:50.838 1+0 records in 00:15:50.838 1+0 records out 00:15:50.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363293 s, 11.3 MB/s 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:50.838 1+0 records in 00:15:50.838 1+0 records out 00:15:50.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440201 s, 9.3 MB/s 00:15:50.838 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.100 1+0 records in 00:15:51.100 1+0 records out 00:15:51.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550617 s, 7.4 MB/s 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:51.100 12:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:51.362 12:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:51.362 12:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:51.362 12:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:51.362 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:51.362 12:48:16 
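Stripped of the xtrace framing, each of these attach steps is one RPC against the standalone nbd socket; run by hand it looks like this (paths as in the trace — with no explicit device argument, the target picks the first free /dev/nbdX and prints it, which is the value the test captures):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
# Export one SPDK bdev as a kernel nbd device; stdout is the allocated node.
nbd_device=$("$rpc" -s "$sock" nbd_start_disk nvme0n1)
echo "nvme0n1 exported as $nbd_device"   # e.g. /dev/nbd0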
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:51.362 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:51.362 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:51.362 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:51.362 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:51.362 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:51.362 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:51.362 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.362 1+0 records in 00:15:51.362 1+0 records out 00:15:51.362 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413317 s, 9.9 MB/s 00:15:51.362 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.362 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:51.362 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.362 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:51.362 12:48:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:51.362 12:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:51.362 12:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:51.362 12:48:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:51.646 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:51.646 { 00:15:51.646 "nbd_device": "/dev/nbd0", 00:15:51.646 "bdev_name": "nvme0n1" 00:15:51.646 }, 00:15:51.646 { 00:15:51.646 "nbd_device": "/dev/nbd1", 00:15:51.646 "bdev_name": "nvme0n2" 00:15:51.646 }, 00:15:51.646 { 00:15:51.646 "nbd_device": "/dev/nbd2", 00:15:51.646 "bdev_name": "nvme0n3" 00:15:51.646 }, 00:15:51.646 { 00:15:51.646 "nbd_device": "/dev/nbd3", 00:15:51.646 "bdev_name": "nvme1n1" 00:15:51.646 }, 00:15:51.646 { 00:15:51.646 "nbd_device": "/dev/nbd4", 00:15:51.646 "bdev_name": "nvme2n1" 00:15:51.646 }, 00:15:51.646 { 00:15:51.646 "nbd_device": "/dev/nbd5", 00:15:51.646 "bdev_name": "nvme3n1" 00:15:51.646 } 00:15:51.646 ]' 00:15:51.646 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:51.646 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:51.646 { 00:15:51.646 "nbd_device": "/dev/nbd0", 00:15:51.646 "bdev_name": "nvme0n1" 00:15:51.646 }, 00:15:51.646 { 00:15:51.646 "nbd_device": "/dev/nbd1", 00:15:51.646 "bdev_name": "nvme0n2" 00:15:51.646 }, 00:15:51.646 { 00:15:51.646 "nbd_device": "/dev/nbd2", 00:15:51.646 "bdev_name": "nvme0n3" 00:15:51.646 }, 00:15:51.646 { 00:15:51.646 "nbd_device": "/dev/nbd3", 00:15:51.646 "bdev_name": "nvme1n1" 00:15:51.646 }, 00:15:51.646 { 00:15:51.646 "nbd_device": "/dev/nbd4", 00:15:51.646 "bdev_name": "nvme2n1" 00:15:51.646 }, 00:15:51.646 { 00:15:51.646 "nbd_device": "/dev/nbd5", 00:15:51.646 "bdev_name": "nvme3n1" 00:15:51.646 } 00:15:51.646 ]' 00:15:51.646 12:48:17 
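The nbd_get_disks reply captured above is plain JSON, which is why the very next step can peel out the device paths with a one-line jq filter; the same query in isolation:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
# The RPC returns [{"nbd_device": "/dev/nbd0", "bdev_name": "nvme0n1"}, ...];
# '.[] | .nbd_device' flattens that to one device path per line.
nbd_disks_name=($("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'))
printf '%s\n' "${nbd_disks_name[@]}"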
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:51.646 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:51.646 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:51.646 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:51.646 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:51.646 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:51.646 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.646 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:51.929 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:51.929 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:51.929 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:51.929 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.929 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.929 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:51.929 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:51.929 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.929 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.929 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:52.189 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:52.189 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:52.189 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:52.189 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.189 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.189 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:52.189 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:52.189 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.189 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.189 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:52.189 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:52.189 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:52.189 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:52.189 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.189 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.189 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:52.189 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:52.189 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.189 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.189 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:52.450 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:52.450 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:52.450 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:52.450 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.450 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.450 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:52.451 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:52.451 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.451 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.451 12:48:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:52.712 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:52.712 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:52.712 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:52.712 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.712 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.712 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:52.712 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:52.712 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.712 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.712 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:52.973 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:52.973 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:52.973 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:52.973 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.973 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.973 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:52.973 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:52.973 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.973 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:52.973 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:52.973 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:53.235 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:53.235 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:53.235 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:53.235 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:53.235 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:53.235 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:53.235 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:53.235 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:53.235 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:53.235 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:53.235 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:53.235 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:53.235 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:53.235 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:53.235 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:53.235 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:53.235 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:53.235 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:53.235 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:53.235 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:53.235 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:53.235 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:53.236 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:53.236 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:53.236 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:53.236 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:53.236 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:53.236 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:53.497 /dev/nbd0 00:15:53.497 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:53.497 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:53.497 12:48:18 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:53.497 12:48:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:53.497 12:48:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:53.497 12:48:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:53.497 12:48:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:53.497 12:48:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:53.497 12:48:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:53.497 12:48:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:53.497 12:48:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:53.497 1+0 records in 00:15:53.497 1+0 records out 00:15:53.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000543619 s, 7.5 MB/s 00:15:53.497 12:48:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:53.497 12:48:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:53.497 12:48:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:53.497 12:48:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:53.497 12:48:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:53.497 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:53.497 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:53.497 12:48:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:53.497 /dev/nbd1 00:15:53.497 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:53.497 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:53.497 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:53.497 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:53.497 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:53.497 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:53.497 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:53.758 1+0 records in 00:15:53.758 1+0 records out 00:15:53.758 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409176 s, 10.0 MB/s 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:53.758 12:48:19 
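This second attach pass differs from the first only in that the test chooses the device nodes itself, zipping the bdev list against a fixed nbd list; the loop still running in these records reduces to (lists as in the trace, waitfornbd as sketched earlier):

bdev_list=(nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1)
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for (( i = 0; i < 6; i++ )); do
    # Pin each bdev to an explicit /dev/nbdX instead of letting the target pick.
    "$rpc" -s /var/tmp/spdk-nbd.sock nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
    waitfornbd "$(basename "${nbd_list[i]}")"
done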
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:53.758 /dev/nbd10 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:53.758 1+0 records in 00:15:53.758 1+0 records out 00:15:53.758 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478698 s, 8.6 MB/s 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:53.758 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:54.019 /dev/nbd11 00:15:54.019 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:54.019 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:54.019 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:54.019 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:54.019 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:54.019 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:54.019 12:48:19 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:54.019 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:54.019 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:54.019 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:54.019 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:54.019 1+0 records in 00:15:54.019 1+0 records out 00:15:54.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423959 s, 9.7 MB/s 00:15:54.019 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.019 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:54.019 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.019 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:54.019 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:54.019 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:54.019 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:54.019 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:54.279 /dev/nbd12 00:15:54.279 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:54.279 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:54.279 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:54.279 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:54.279 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:54.279 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:54.279 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:54.279 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:54.279 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:54.279 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:54.279 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:54.279 1+0 records in 00:15:54.279 1+0 records out 00:15:54.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000628398 s, 6.5 MB/s 00:15:54.279 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.279 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:54.279 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.279 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:54.279 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:54.279 12:48:19 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:54.279 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:54.279 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:54.540 /dev/nbd13 00:15:54.540 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:54.540 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:54.540 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:54.540 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:54.540 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:54.540 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:54.540 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:54.540 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:54.540 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:54.540 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:54.540 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:54.540 1+0 records in 00:15:54.540 1+0 records out 00:15:54.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000617635 s, 6.6 MB/s 00:15:54.540 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.540 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:54.540 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.540 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:54.540 12:48:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:54.540 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:54.540 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:54.540 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:54.540 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:54.540 12:48:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:54.805 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:54.805 { 00:15:54.805 "nbd_device": "/dev/nbd0", 00:15:54.805 "bdev_name": "nvme0n1" 00:15:54.805 }, 00:15:54.805 { 00:15:54.805 "nbd_device": "/dev/nbd1", 00:15:54.805 "bdev_name": "nvme0n2" 00:15:54.805 }, 00:15:54.805 { 00:15:54.805 "nbd_device": "/dev/nbd10", 00:15:54.805 "bdev_name": "nvme0n3" 00:15:54.805 }, 00:15:54.805 { 00:15:54.805 "nbd_device": "/dev/nbd11", 00:15:54.805 "bdev_name": "nvme1n1" 00:15:54.805 }, 00:15:54.805 { 00:15:54.805 "nbd_device": "/dev/nbd12", 00:15:54.805 "bdev_name": "nvme2n1" 00:15:54.805 }, 00:15:54.805 { 00:15:54.805 "nbd_device": "/dev/nbd13", 00:15:54.805 "bdev_name": "nvme3n1" 00:15:54.805 } 00:15:54.805 ]' 00:15:54.805 12:48:20 
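With all six devices mapped, the data pass that follows writes one shared 1 MiB random pattern through every device and then reads it back; reduced to its essentials (temp-file path as in the trace):

tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
# One source pattern, written to every device with O_DIRECT...
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for i in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
done
# ...then each device's first 1 MiB must compare equal to the source.
for i in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$i"   # non-zero exit on the first mismatch
done
rm "$tmp_file"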
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:54.805 { 00:15:54.805 "nbd_device": "/dev/nbd0", 00:15:54.805 "bdev_name": "nvme0n1" 00:15:54.805 }, 00:15:54.805 { 00:15:54.805 "nbd_device": "/dev/nbd1", 00:15:54.805 "bdev_name": "nvme0n2" 00:15:54.805 }, 00:15:54.805 { 00:15:54.805 "nbd_device": "/dev/nbd10", 00:15:54.805 "bdev_name": "nvme0n3" 00:15:54.805 }, 00:15:54.805 { 00:15:54.805 "nbd_device": "/dev/nbd11", 00:15:54.805 "bdev_name": "nvme1n1" 00:15:54.805 }, 00:15:54.805 { 00:15:54.805 "nbd_device": "/dev/nbd12", 00:15:54.805 "bdev_name": "nvme2n1" 00:15:54.805 }, 00:15:54.805 { 00:15:54.805 "nbd_device": "/dev/nbd13", 00:15:54.805 "bdev_name": "nvme3n1" 00:15:54.805 } 00:15:54.805 ]' 00:15:54.805 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:54.805 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:54.805 /dev/nbd1 00:15:54.805 /dev/nbd10 00:15:54.805 /dev/nbd11 00:15:54.805 /dev/nbd12 00:15:54.805 /dev/nbd13' 00:15:54.806 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:54.806 /dev/nbd1 00:15:54.806 /dev/nbd10 00:15:54.806 /dev/nbd11 00:15:54.806 /dev/nbd12 00:15:54.806 /dev/nbd13' 00:15:54.806 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:54.806 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:54.806 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:54.806 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:54.806 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:54.806 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:54.806 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:54.806 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:54.806 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:54.806 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:54.806 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:54.806 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:54.806 256+0 records in 00:15:54.806 256+0 records out 00:15:54.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00904952 s, 116 MB/s 00:15:54.806 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:54.806 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:54.806 256+0 records in 00:15:54.806 256+0 records out 00:15:54.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.061853 s, 17.0 MB/s 00:15:54.806 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:54.806 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:54.806 256+0 records in 00:15:54.806 256+0 records out 00:15:54.806 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0608963 s, 17.2 MB/s 00:15:54.806 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:54.806 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:55.068 256+0 records in 00:15:55.068 256+0 records out 00:15:55.068 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0566768 s, 18.5 MB/s 00:15:55.068 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:55.068 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:55.068 256+0 records in 00:15:55.068 256+0 records out 00:15:55.068 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0549425 s, 19.1 MB/s 00:15:55.068 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:55.068 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:55.068 256+0 records in 00:15:55.068 256+0 records out 00:15:55.068 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.065516 s, 16.0 MB/s 00:15:55.068 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:55.068 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:55.068 256+0 records in 00:15:55.068 256+0 records out 00:15:55.068 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0549744 s, 19.1 MB/s 00:15:55.068 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:55.068 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:55.068 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:55.068 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:55.068 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:55.068 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:55.068 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:55.068 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:55.068 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:55.068 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:55.068 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:55.068 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:55.068 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:55.068 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:55.068 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:55.330 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:55.330 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:55.330 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:55.331 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:55.331 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:55.331 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:55.331 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:55.331 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:55.331 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:55.331 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:55.331 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:55.331 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:55.331 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:55.331 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:55.331 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:55.331 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:55.331 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:55.331 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:55.331 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:55.331 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:55.331 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:55.331 12:48:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:55.592 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:55.592 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:55.592 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:55.592 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:55.592 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:55.592 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:55.592 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:55.592 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:55.592 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:55.592 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:55.854 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:55.854 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:55.854 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:55.854 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:55.854 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:55.854 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:55.854 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:55.854 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:55.854 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:55.854 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:56.115 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:56.115 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:56.115 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:56.115 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:56.115 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:56.116 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:56.116 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:56.116 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:56.116 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:56.116 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:56.378 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:56.378 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:56.378 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:56.378 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:56.378 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:56.378 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:56.378 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:56.378 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:56.378 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:56.378 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:56.639 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:56.639 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:56.639 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:56.639 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:56.639 12:48:21 
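The stop loop winding down here leans on waitfornbd_exit, the mirror image of the readiness check: poll until the name vanishes from /proc/partitions. A sketch (iteration budget as in the trace, the retry delay again an assumption):

waitfornbd_exit() {
    local nbd_name=$1 i
    for (( i = 1; i <= 20; i++ )); do
        # Done once the kernel no longer lists the device.
        grep -q -w "$nbd_name" /proc/partitions || return 0
        sleep 0.1
    done
    return 1
}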
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:56.639 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:56.639 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:56.639 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:56.639 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:56.639 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:56.639 12:48:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:56.639 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:56.639 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:56.639 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:56.900 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:56.900 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:56.900 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:56.900 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:56.900 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:56.900 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:56.900 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:56.900 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:56.900 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:56.900 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:56.900 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:56.900 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:56.900 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:56.900 malloc_lvol_verify 00:15:56.900 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:57.162 b7e0f4e4-821b-429e-b823-2f577d5ed5e6 00:15:57.162 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:57.162 131274cf-0884-41ce-b4d9-e5744ad3c1ec 00:15:57.162 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:57.423 /dev/nbd0 00:15:57.423 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:57.423 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:57.423 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:57.423 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:57.423 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:15:57.423 mke2fs 1.47.0 (5-Feb-2023) 00:15:57.423 Discarding device blocks: 0/4096 done 00:15:57.423 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:57.424 00:15:57.424 Allocating group tables: 0/1 done 00:15:57.424 Writing inode tables: 0/1 done 00:15:57.424 Creating journal (1024 blocks): done 00:15:57.424 Writing superblocks and filesystem accounting information: 0/1 done 00:15:57.424 00:15:57.424 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:57.424 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:57.424 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:57.424 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:57.424 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:57.424 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:57.424 12:48:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:57.685 12:48:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:57.685 12:48:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:57.685 12:48:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:57.685 12:48:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:57.685 12:48:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:57.685 12:48:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:57.685 12:48:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:57.685 12:48:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:57.685 12:48:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72513 00:15:57.685 12:48:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72513 ']' 00:15:57.685 12:48:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72513 00:15:57.685 12:48:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:57.685 12:48:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:57.685 12:48:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72513 00:15:57.685 12:48:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:57.685 12:48:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:57.685 killing process with pid 72513 00:15:57.685 12:48:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72513' 00:15:57.685 12:48:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72513 00:15:57.685 12:48:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72513 00:15:58.258 12:48:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:58.258 00:15:58.258 real 0m9.164s 00:15:58.258 user 0m12.927s 00:15:58.258 sys 0m3.197s 00:15:58.258 12:48:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.258 ************************************ 00:15:58.258 END TEST bdev_nbd 00:15:58.258 ************************************ 00:15:58.258 
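The lvol pass that closed out the test stacked a logical volume on a malloc bdev, exported it over nbd, and proved the kernel could build a filesystem on it; as one bare sequence of RPCs (sizes as in the trace — a 16 MiB malloc bdev with 512-byte blocks carrying a 4 MiB lvol):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
"$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
"$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
"$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs
"$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
mkfs.ext4 /dev/nbd0                       # succeeds only if the lvol takes real I/O
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0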
12:48:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:58.520 12:48:23 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:15:58.520 12:48:23 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:15:58.520 12:48:23 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:15:58.520 12:48:23 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:15:58.520 12:48:23 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:58.520 12:48:23 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.520 12:48:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:58.520 ************************************ 00:15:58.520 START TEST bdev_fio 00:15:58.520 ************************************ 00:15:58.520 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo 
serialize_overlap=1 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.520 12:48:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:58.520 ************************************ 00:15:58.520 START TEST bdev_fio_rw_verify 00:15:58.520 ************************************ 00:15:58.521 12:48:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:58.521 12:48:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:58.521 12:48:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:58.521 12:48:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:58.521 12:48:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:58.521 12:48:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:58.521 12:48:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:58.521 12:48:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:58.521 12:48:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:58.521 12:48:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:58.521 12:48:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:58.521 12:48:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:58.521 12:48:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:58.521 12:48:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:58.521 12:48:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:58.521 12:48:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:58.521 12:48:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:58.781 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:58.781 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:58.781 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:58.782 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:58.782 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:58.782 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:58.782 fio-3.35 00:15:58.782 Starting 6 threads 00:16:11.012 00:16:11.012 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72905: Wed Nov 20 12:48:34 2024 00:16:11.012 read: IOPS=19.7k, BW=77.1MiB/s (80.8MB/s)(771MiB/10003msec) 00:16:11.012 slat (usec): min=2, max=3263, avg= 5.47, stdev=17.90 00:16:11.012 clat (usec): min=76, max=8081, avg=960.34, 
stdev=736.60 00:16:11.012 lat (usec): min=80, max=8097, avg=965.81, stdev=737.51 00:16:11.012 clat percentiles (usec): 00:16:11.012 | 50.000th=[ 725], 99.000th=[ 3458], 99.900th=[ 4883], 99.990th=[ 7504], 00:16:11.012 | 99.999th=[ 8094] 00:16:11.012 write: IOPS=20.1k, BW=78.5MiB/s (82.3MB/s)(785MiB/10003msec); 0 zone resets 00:16:11.012 slat (usec): min=9, max=4711, avg=34.46, stdev=120.87 00:16:11.012 clat (usec): min=67, max=13397, avg=1159.99, stdev=820.78 00:16:11.012 lat (usec): min=80, max=13465, avg=1194.45, stdev=836.17 00:16:11.012 clat percentiles (usec): 00:16:11.012 | 50.000th=[ 914], 99.000th=[ 3785], 99.900th=[ 5342], 99.990th=[ 6456], 00:16:11.012 | 99.999th=[13304] 00:16:11.012 bw ( KiB/s): min=49044, max=143433, per=100.00%, avg=81844.53, stdev=5499.69, samples=114 00:16:11.012 iops : min=12258, max=35858, avg=20460.05, stdev=1374.99, samples=114 00:16:11.012 lat (usec) : 100=0.03%, 250=6.54%, 500=19.33%, 750=20.19%, 1000=12.89% 00:16:11.012 lat (msec) : 2=28.81%, 4=11.65%, 10=0.54%, 20=0.01% 00:16:11.012 cpu : usr=42.95%, sys=32.95%, ctx=6470, majf=0, minf=18357 00:16:11.012 IO depths : 1=11.6%, 2=24.1%, 4=50.9%, 8=13.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:11.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.012 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.012 issued rwts: total=197327,200915,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.012 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:11.012 00:16:11.012 Run status group 0 (all jobs): 00:16:11.012 READ: bw=77.1MiB/s (80.8MB/s), 77.1MiB/s-77.1MiB/s (80.8MB/s-80.8MB/s), io=771MiB (808MB), run=10003-10003msec 00:16:11.012 WRITE: bw=78.5MiB/s (82.3MB/s), 78.5MiB/s-78.5MiB/s (82.3MB/s-82.3MB/s), io=785MiB (823MB), run=10003-10003msec 00:16:11.012 ----------------------------------------------------- 00:16:11.012 Suppressions used: 00:16:11.012 count bytes template 00:16:11.012 6 48 /usr/src/fio/parse.c 00:16:11.012 3453 331488 /usr/src/fio/iolog.c 00:16:11.012 1 8 libtcmalloc_minimal.so 00:16:11.012 1 904 libcrypto.so 00:16:11.012 ----------------------------------------------------- 00:16:11.012 00:16:11.012 00:16:11.012 real 0m11.856s 00:16:11.012 user 0m27.242s 00:16:11.012 sys 0m20.038s 00:16:11.012 12:48:35 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:11.012 ************************************ 00:16:11.012 END TEST bdev_fio_rw_verify 00:16:11.012 ************************************ 00:16:11.012 12:48:35 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:11.012 12:48:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:11.012 12:48:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:11.012 12:48:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:11.012 12:48:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:11.012 12:48:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:16:11.012 12:48:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:16:11.012 12:48:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:11.012 12:48:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- 
# local fio_dir=/usr/src/fio 00:16:11.012 12:48:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:11.012 12:48:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:11.012 12:48:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:11.012 12:48:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:11.012 12:48:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:11.012 12:48:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:11.012 12:48:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:11.012 12:48:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:11.012 12:48:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:11.013 12:48:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "97f66f30-cd64-4dd8-b681-6eda3170a677"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "97f66f30-cd64-4dd8-b681-6eda3170a677",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "e76972a3-1529-4e31-83be-955c438c3d5e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e76972a3-1529-4e31-83be-955c438c3d5e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "b8b5cbda-47a7-4164-b137-6a855e536e71"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b8b5cbda-47a7-4164-b137-6a855e536e71",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' 
"write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "16a0d49c-816f-47c1-a9f4-ab4f78b4f9fb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "16a0d49c-816f-47c1-a9f4-ab4f78b4f9fb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "e263d508-f00f-44fd-b0f8-f183f2258461"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e263d508-f00f-44fd-b0f8-f183f2258461",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "6e49e19e-1716-4995-9ffd-6f05a1e1cf7e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6e49e19e-1716-4995-9ffd-6f05a1e1cf7e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:11.013 12:48:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:11.013 12:48:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:11.013 /home/vagrant/spdk_repo/spdk 00:16:11.013 12:48:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:11.013 12:48:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:11.013 12:48:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:16:11.013 ************************************ 00:16:11.013 00:16:11.013 real 0m12.028s 00:16:11.013 user 0m27.319s 00:16:11.013 sys 0m20.114s 00:16:11.013 12:48:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:11.013 12:48:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:11.013 END TEST bdev_fio 00:16:11.013 ************************************ 00:16:11.013 12:48:35 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:11.013 12:48:35 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:11.013 12:48:35 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:11.013 12:48:35 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:11.013 12:48:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:11.013 ************************************ 00:16:11.013 START TEST bdev_verify 00:16:11.013 ************************************ 00:16:11.013 12:48:35 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:11.013 [2024-11-20 12:48:35.959906] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:16:11.013 [2024-11-20 12:48:35.960048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73083 ] 00:16:11.013 [2024-11-20 12:48:36.125562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:11.013 [2024-11-20 12:48:36.247019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.013 [2024-11-20 12:48:36.247182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.274 Running I/O for 5 seconds... 
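For reference, the verify stage is a single bdevperf invocation (full command in the trace above; paths shortened here, and this assumes the bdev.json generated during the fio stage is still in place):

# 4 KiB I/O with verification, queue depth 128, 5 seconds, core mask 0x3 (two reactors);
# every bdev ends up exercised from both cores, which is why each device appears as a
# paired "Core Mask 0x1" / "Core Mask 0x2" row in the latency table below
build/examples/bdevperf --json test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3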
00:16:13.609 24576.00 IOPS, 96.00 MiB/s [2024-11-20T12:48:40.070Z] 24080.00 IOPS, 94.06 MiB/s [2024-11-20T12:48:41.015Z] 23850.67 IOPS, 93.17 MiB/s [2024-11-20T12:48:41.959Z] 23424.00 IOPS, 91.50 MiB/s [2024-11-20T12:48:41.959Z] 23200.00 IOPS, 90.62 MiB/s 00:16:16.440 Latency(us) 00:16:16.440 [2024-11-20T12:48:41.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.440 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:16.440 Verification LBA range: start 0x0 length 0x80000 00:16:16.440 nvme0n1 : 5.06 1896.86 7.41 0.00 0.00 67360.30 6956.90 65334.35 00:16:16.440 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:16.440 Verification LBA range: start 0x80000 length 0x80000 00:16:16.440 nvme0n1 : 5.03 1906.71 7.45 0.00 0.00 67009.66 6225.92 66544.25 00:16:16.440 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:16.440 Verification LBA range: start 0x0 length 0x80000 00:16:16.440 nvme0n2 : 5.03 1856.06 7.25 0.00 0.00 68710.70 11241.94 62511.26 00:16:16.440 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:16.440 Verification LBA range: start 0x80000 length 0x80000 00:16:16.440 nvme0n2 : 5.06 1871.85 7.31 0.00 0.00 68134.16 12905.55 58478.28 00:16:16.440 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:16.440 Verification LBA range: start 0x0 length 0x80000 00:16:16.440 nvme0n3 : 5.04 1854.50 7.24 0.00 0.00 68653.36 15123.69 62107.96 00:16:16.440 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:16.440 Verification LBA range: start 0x80000 length 0x80000 00:16:16.440 nvme0n3 : 5.04 1854.20 7.24 0.00 0.00 68649.32 10737.82 66140.95 00:16:16.440 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:16.440 Verification LBA range: start 0x0 length 0x20000 00:16:16.440 nvme1n1 : 5.06 1870.16 7.31 0.00 0.00 67961.55 4486.70 64931.05 00:16:16.440 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:16.440 Verification LBA range: start 0x20000 length 0x20000 00:16:16.440 nvme1n1 : 5.05 1876.33 7.33 0.00 0.00 67713.10 6553.60 64124.46 00:16:16.440 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:16.440 Verification LBA range: start 0x0 length 0xbd0bd 00:16:16.440 nvme2n1 : 5.06 2582.77 10.09 0.00 0.00 49096.06 5419.32 57671.68 00:16:16.440 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:16.440 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:16.440 nvme2n1 : 5.06 2458.34 9.60 0.00 0.00 51524.59 5318.50 58881.58 00:16:16.440 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:16.440 Verification LBA range: start 0x0 length 0xa0000 00:16:16.440 nvme3n1 : 5.24 1636.84 6.39 0.00 0.00 77427.44 1506.07 303280.44 00:16:16.440 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:16.440 Verification LBA range: start 0xa0000 length 0xa0000 00:16:16.440 nvme3n1 : 5.24 1246.04 4.87 0.00 0.00 101490.99 1606.89 374260.97 00:16:16.440 [2024-11-20T12:48:41.959Z] =================================================================================================================== 00:16:16.440 [2024-11-20T12:48:41.959Z] Total : 22910.66 89.49 0.00 0.00 66700.60 1506.07 374260.97 00:16:17.383 00:16:17.383 real 0m6.863s 00:16:17.383 user 0m10.978s 00:16:17.383 sys 0m1.552s 00:16:17.383 ************************************ 00:16:17.383 
12:48:42 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.383 12:48:42 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:17.383 END TEST bdev_verify 00:16:17.383 ************************************ 00:16:17.383 12:48:42 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:17.383 12:48:42 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:17.383 12:48:42 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.383 12:48:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:17.383 ************************************ 00:16:17.383 START TEST bdev_verify_big_io 00:16:17.383 ************************************ 00:16:17.383 12:48:42 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:17.383 [2024-11-20 12:48:42.894540] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:16:17.383 [2024-11-20 12:48:42.894701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73187 ] 00:16:17.644 [2024-11-20 12:48:43.070845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:17.906 [2024-11-20 12:48:43.191843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.906 [2024-11-20 12:48:43.191878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.478 Running I/O for 5 seconds... 
00:16:23.579 1197.00 IOPS, 74.81 MiB/s [2024-11-20T12:48:50.060Z] 2294.00 IOPS, 143.38 MiB/s [2024-11-20T12:48:50.060Z] 3084.00 IOPS, 192.75 MiB/s 00:16:24.541 Latency(us) 00:16:24.541 [2024-11-20T12:48:50.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.541 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:24.541 Verification LBA range: start 0x0 length 0x8000 00:16:24.541 nvme0n1 : 5.69 134.90 8.43 0.00 0.00 911709.74 166158.97 1045349.61 00:16:24.541 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:24.541 Verification LBA range: start 0x8000 length 0x8000 00:16:24.541 nvme0n1 : 5.91 130.00 8.13 0.00 0.00 941051.54 26214.40 1051802.39 00:16:24.541 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:24.541 Verification LBA range: start 0x0 length 0x8000 00:16:24.541 nvme0n2 : 5.97 117.96 7.37 0.00 0.00 998637.45 52832.10 1677721.60 00:16:24.541 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:24.541 Verification LBA range: start 0x8000 length 0x8000 00:16:24.541 nvme0n2 : 5.92 137.70 8.61 0.00 0.00 878774.48 77030.01 1845493.76 00:16:24.541 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:24.541 Verification LBA range: start 0x0 length 0x8000 00:16:24.541 nvme0n3 : 5.85 87.52 5.47 0.00 0.00 1303448.02 209715.20 2787598.97 00:16:24.541 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:24.541 Verification LBA range: start 0x8000 length 0x8000 00:16:24.541 nvme0n3 : 5.91 127.20 7.95 0.00 0.00 891969.23 4285.05 1161499.57 00:16:24.541 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:24.541 Verification LBA range: start 0x0 length 0x2000 00:16:24.541 nvme1n1 : 5.97 128.58 8.04 0.00 0.00 882095.66 101227.91 1910021.51 00:16:24.541 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:24.541 Verification LBA range: start 0x2000 length 0x2000 00:16:24.541 nvme1n1 : 5.92 126.93 7.93 0.00 0.00 896392.04 9124.63 1264743.98 00:16:24.541 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:24.541 Verification LBA range: start 0x0 length 0xbd0b 00:16:24.541 nvme2n1 : 5.99 212.03 13.25 0.00 0.00 522117.25 7511.43 1006632.96 00:16:24.541 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:24.541 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:24.541 nvme2n1 : 5.94 169.81 10.61 0.00 0.00 647047.07 5595.77 1619646.62 00:16:24.541 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:24.541 Verification LBA range: start 0x0 length 0xa000 00:16:24.541 nvme3n1 : 5.98 149.85 9.37 0.00 0.00 715190.04 11191.53 909841.33 00:16:24.541 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:24.541 Verification LBA range: start 0xa000 length 0xa000 00:16:24.541 nvme3n1 : 5.93 115.98 7.25 0.00 0.00 918747.82 10485.76 2387526.89 00:16:24.541 [2024-11-20T12:48:50.060Z] =================================================================================================================== 00:16:24.541 [2024-11-20T12:48:50.060Z] Total : 1638.46 102.40 0.00 0.00 837064.35 4285.05 2787598.97 00:16:25.485 00:16:25.485 real 0m7.897s 00:16:25.485 user 0m14.377s 00:16:25.485 sys 0m0.491s 00:16:25.485 12:48:50 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:25.485 
************************************ 00:16:25.485 END TEST bdev_verify_big_io 00:16:25.485 ************************************ 00:16:25.485 12:48:50 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.485 12:48:50 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:25.485 12:48:50 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:25.485 12:48:50 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:25.485 12:48:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:25.485 ************************************ 00:16:25.485 START TEST bdev_write_zeroes 00:16:25.485 ************************************ 00:16:25.485 12:48:50 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:25.485 [2024-11-20 12:48:50.857009] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:16:25.485 [2024-11-20 12:48:50.857156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73291 ] 00:16:25.747 [2024-11-20 12:48:51.019452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.747 [2024-11-20 12:48:51.144688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.319 Running I/O for 1 seconds... 
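The write_zeroes stage reuses the same bdevperf harness with only the workload, duration, and core count changed (a single reactor this time; command as traced above, paths shortened):

# Issue WRITE ZEROES commands rather than data writes against every bdev in
# bdev.json for one second
build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1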
00:16:27.264 72800.00 IOPS, 284.38 MiB/s 00:16:27.264 Latency(us) 00:16:27.264 [2024-11-20T12:48:52.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.264 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:27.264 nvme0n1 : 1.02 11975.45 46.78 0.00 0.00 10678.08 7612.26 23592.96 00:16:27.264 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:27.264 nvme0n2 : 1.02 11961.37 46.72 0.00 0.00 10681.49 7612.26 23592.96 00:16:27.264 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:27.264 nvme0n3 : 1.02 11947.72 46.67 0.00 0.00 10684.09 7612.26 23592.96 00:16:27.264 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:27.264 nvme1n1 : 1.02 11934.36 46.62 0.00 0.00 10687.80 7763.50 23592.96 00:16:27.264 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:27.264 nvme2n1 : 1.02 12465.15 48.69 0.00 0.00 10223.89 3906.95 16636.06 00:16:27.264 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:27.264 nvme3n1 : 1.02 12044.84 47.05 0.00 0.00 10511.20 4436.28 24298.73 00:16:27.264 [2024-11-20T12:48:52.783Z] =================================================================================================================== 00:16:27.264 [2024-11-20T12:48:52.783Z] Total : 72328.88 282.53 0.00 0.00 10574.72 3906.95 24298.73 00:16:28.209 00:16:28.209 real 0m2.603s 00:16:28.209 user 0m1.903s 00:16:28.209 sys 0m0.500s 00:16:28.209 12:48:53 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.209 12:48:53 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:28.209 ************************************ 00:16:28.209 END TEST bdev_write_zeroes 00:16:28.209 ************************************ 00:16:28.209 12:48:53 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:28.209 12:48:53 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:28.209 12:48:53 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:28.209 12:48:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:28.209 ************************************ 00:16:28.209 START TEST bdev_json_nonenclosed 00:16:28.209 ************************************ 00:16:28.209 12:48:53 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:28.209 [2024-11-20 12:48:53.537819] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
00:16:28.209 [2024-11-20 12:48:53.537969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73343 ] 00:16:28.209 [2024-11-20 12:48:53.702793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.471 [2024-11-20 12:48:53.824006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.471 [2024-11-20 12:48:53.824106] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:28.471 [2024-11-20 12:48:53.824125] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:28.471 [2024-11-20 12:48:53.824135] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:28.733 00:16:28.733 real 0m0.556s 00:16:28.733 user 0m0.331s 00:16:28.733 sys 0m0.118s 00:16:28.733 12:48:54 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.733 ************************************ 00:16:28.733 END TEST bdev_json_nonenclosed 00:16:28.733 ************************************ 00:16:28.733 12:48:54 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:28.733 12:48:54 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:28.733 12:48:54 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:28.733 12:48:54 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:28.733 12:48:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:28.733 ************************************ 00:16:28.733 START TEST bdev_json_nonarray 00:16:28.733 ************************************ 00:16:28.733 12:48:54 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:28.733 [2024-11-20 12:48:54.152003] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:16:28.733 [2024-11-20 12:48:54.152152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73371 ] 00:16:28.994 [2024-11-20 12:48:54.317006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.994 [2024-11-20 12:48:54.438056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.994 [2024-11-20 12:48:54.438166] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:16:28.994 [2024-11-20 12:48:54.438190] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:28.994 [2024-11-20 12:48:54.438201] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:29.256 00:16:29.256 real 0m0.550s 00:16:29.256 user 0m0.336s 00:16:29.256 sys 0m0.108s 00:16:29.256 12:48:54 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.256 ************************************ 00:16:29.256 END TEST bdev_json_nonarray 00:16:29.256 ************************************ 00:16:29.256 12:48:54 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:29.256 12:48:54 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:16:29.256 12:48:54 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:16:29.256 12:48:54 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:16:29.256 12:48:54 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:16:29.256 12:48:54 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:16:29.256 12:48:54 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:29.256 12:48:54 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:29.256 12:48:54 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:29.256 12:48:54 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:29.256 12:48:54 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:29.256 12:48:54 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:29.256 12:48:54 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:29.829 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:38.070 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:41.375 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:41.375 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:41.375 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:41.375 00:16:41.375 real 1m0.123s 00:16:41.375 user 1m20.148s 00:16:41.375 sys 0m45.962s 00:16:41.375 12:49:06 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.375 12:49:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:41.375 ************************************ 00:16:41.375 END TEST blockdev_xnvme 00:16:41.375 ************************************ 00:16:41.375 12:49:06 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:41.375 12:49:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:41.375 12:49:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:41.375 12:49:06 -- common/autotest_common.sh@10 -- # set +x 00:16:41.375 ************************************ 00:16:41.375 START TEST ublk 00:16:41.375 ************************************ 00:16:41.375 12:49:06 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:41.375 * Looking for test storage... 
00:16:41.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:41.375 12:49:06 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:41.375 12:49:06 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:16:41.375 12:49:06 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:41.375 12:49:06 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:41.375 12:49:06 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:41.375 12:49:06 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:41.375 12:49:06 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:41.375 12:49:06 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:41.375 12:49:06 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:41.375 12:49:06 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:41.375 12:49:06 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:41.375 12:49:06 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:41.375 12:49:06 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:41.375 12:49:06 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:41.375 12:49:06 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:41.375 12:49:06 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:41.375 12:49:06 ublk -- scripts/common.sh@345 -- # : 1 00:16:41.375 12:49:06 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:41.375 12:49:06 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:41.375 12:49:06 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:41.375 12:49:06 ublk -- scripts/common.sh@353 -- # local d=1 00:16:41.375 12:49:06 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:41.375 12:49:06 ublk -- scripts/common.sh@355 -- # echo 1 00:16:41.375 12:49:06 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:41.375 12:49:06 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:41.375 12:49:06 ublk -- scripts/common.sh@353 -- # local d=2 00:16:41.375 12:49:06 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:41.375 12:49:06 ublk -- scripts/common.sh@355 -- # echo 2 00:16:41.375 12:49:06 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:41.375 12:49:06 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:41.375 12:49:06 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:41.375 12:49:06 ublk -- scripts/common.sh@368 -- # return 0 00:16:41.375 12:49:06 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:41.375 12:49:06 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:41.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.375 --rc genhtml_branch_coverage=1 00:16:41.375 --rc genhtml_function_coverage=1 00:16:41.375 --rc genhtml_legend=1 00:16:41.375 --rc geninfo_all_blocks=1 00:16:41.375 --rc geninfo_unexecuted_blocks=1 00:16:41.375 00:16:41.375 ' 00:16:41.375 12:49:06 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:41.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.375 --rc genhtml_branch_coverage=1 00:16:41.375 --rc genhtml_function_coverage=1 00:16:41.375 --rc genhtml_legend=1 00:16:41.375 --rc geninfo_all_blocks=1 00:16:41.375 --rc geninfo_unexecuted_blocks=1 00:16:41.375 00:16:41.375 ' 00:16:41.375 12:49:06 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:41.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.375 --rc genhtml_branch_coverage=1 00:16:41.375 --rc 
genhtml_function_coverage=1 00:16:41.375 --rc genhtml_legend=1 00:16:41.375 --rc geninfo_all_blocks=1 00:16:41.375 --rc geninfo_unexecuted_blocks=1 00:16:41.375 00:16:41.375 ' 00:16:41.375 12:49:06 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:41.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.375 --rc genhtml_branch_coverage=1 00:16:41.375 --rc genhtml_function_coverage=1 00:16:41.375 --rc genhtml_legend=1 00:16:41.375 --rc geninfo_all_blocks=1 00:16:41.375 --rc geninfo_unexecuted_blocks=1 00:16:41.375 00:16:41.375 ' 00:16:41.375 12:49:06 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:41.375 12:49:06 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:41.375 12:49:06 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:41.375 12:49:06 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:41.375 12:49:06 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:41.375 12:49:06 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:41.375 12:49:06 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:41.375 12:49:06 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:41.375 12:49:06 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:41.375 12:49:06 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:41.375 12:49:06 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:41.375 12:49:06 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:41.376 12:49:06 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:41.376 12:49:06 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:41.376 12:49:06 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:41.376 12:49:06 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:41.376 12:49:06 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:41.376 12:49:06 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:41.376 12:49:06 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:41.376 12:49:06 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:41.376 12:49:06 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:41.376 12:49:06 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:41.376 12:49:06 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.376 ************************************ 00:16:41.376 START TEST test_save_ublk_config 00:16:41.376 ************************************ 00:16:41.376 12:49:06 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:41.376 12:49:06 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:41.376 12:49:06 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:41.376 12:49:06 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73667 00:16:41.376 12:49:06 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:41.376 12:49:06 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73667 00:16:41.376 12:49:06 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73667 ']' 00:16:41.376 12:49:06 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.376 12:49:06 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:41.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:41.376 12:49:06 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.376 12:49:06 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:41.376 12:49:06 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:41.376 [2024-11-20 12:49:06.740705] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:16:41.376 [2024-11-20 12:49:06.740834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73667 ] 00:16:41.636 [2024-11-20 12:49:06.895782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.636 [2024-11-20 12:49:06.989947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.206 12:49:07 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:42.206 12:49:07 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:42.206 12:49:07 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:42.206 12:49:07 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:42.206 12:49:07 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.206 12:49:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:42.206 [2024-11-20 12:49:07.589759] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:42.206 [2024-11-20 12:49:07.590531] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:42.206 malloc0 00:16:42.206 [2024-11-20 12:49:07.645905] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:42.206 [2024-11-20 12:49:07.645976] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:42.206 [2024-11-20 12:49:07.645985] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:42.206 [2024-11-20 12:49:07.645994] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:42.206 [2024-11-20 12:49:07.654825] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:42.206 [2024-11-20 12:49:07.654846] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:42.206 [2024-11-20 12:49:07.661764] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:42.206 [2024-11-20 12:49:07.661861] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:42.206 [2024-11-20 12:49:07.678758] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:42.206 0 00:16:42.206 12:49:07 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.206 12:49:07 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:42.206 12:49:07 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.206 12:49:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:42.466 12:49:07 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.466 12:49:07 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:42.466 
"subsystems": [ 00:16:42.466 { 00:16:42.466 "subsystem": "fsdev", 00:16:42.466 "config": [ 00:16:42.466 { 00:16:42.466 "method": "fsdev_set_opts", 00:16:42.466 "params": { 00:16:42.466 "fsdev_io_pool_size": 65535, 00:16:42.466 "fsdev_io_cache_size": 256 00:16:42.466 } 00:16:42.466 } 00:16:42.466 ] 00:16:42.466 }, 00:16:42.466 { 00:16:42.466 "subsystem": "keyring", 00:16:42.466 "config": [] 00:16:42.466 }, 00:16:42.466 { 00:16:42.466 "subsystem": "iobuf", 00:16:42.466 "config": [ 00:16:42.466 { 00:16:42.466 "method": "iobuf_set_options", 00:16:42.466 "params": { 00:16:42.466 "small_pool_count": 8192, 00:16:42.466 "large_pool_count": 1024, 00:16:42.466 "small_bufsize": 8192, 00:16:42.466 "large_bufsize": 135168, 00:16:42.466 "enable_numa": false 00:16:42.466 } 00:16:42.466 } 00:16:42.466 ] 00:16:42.466 }, 00:16:42.466 { 00:16:42.467 "subsystem": "sock", 00:16:42.467 "config": [ 00:16:42.467 { 00:16:42.467 "method": "sock_set_default_impl", 00:16:42.467 "params": { 00:16:42.467 "impl_name": "posix" 00:16:42.467 } 00:16:42.467 }, 00:16:42.467 { 00:16:42.467 "method": "sock_impl_set_options", 00:16:42.467 "params": { 00:16:42.467 "impl_name": "ssl", 00:16:42.467 "recv_buf_size": 4096, 00:16:42.467 "send_buf_size": 4096, 00:16:42.467 "enable_recv_pipe": true, 00:16:42.467 "enable_quickack": false, 00:16:42.467 "enable_placement_id": 0, 00:16:42.467 "enable_zerocopy_send_server": true, 00:16:42.467 "enable_zerocopy_send_client": false, 00:16:42.467 "zerocopy_threshold": 0, 00:16:42.467 "tls_version": 0, 00:16:42.467 "enable_ktls": false 00:16:42.467 } 00:16:42.467 }, 00:16:42.467 { 00:16:42.467 "method": "sock_impl_set_options", 00:16:42.467 "params": { 00:16:42.467 "impl_name": "posix", 00:16:42.467 "recv_buf_size": 2097152, 00:16:42.467 "send_buf_size": 2097152, 00:16:42.467 "enable_recv_pipe": true, 00:16:42.467 "enable_quickack": false, 00:16:42.467 "enable_placement_id": 0, 00:16:42.467 "enable_zerocopy_send_server": true, 00:16:42.467 "enable_zerocopy_send_client": false, 00:16:42.467 "zerocopy_threshold": 0, 00:16:42.467 "tls_version": 0, 00:16:42.467 "enable_ktls": false 00:16:42.467 } 00:16:42.467 } 00:16:42.467 ] 00:16:42.467 }, 00:16:42.467 { 00:16:42.467 "subsystem": "vmd", 00:16:42.467 "config": [] 00:16:42.467 }, 00:16:42.467 { 00:16:42.467 "subsystem": "accel", 00:16:42.467 "config": [ 00:16:42.467 { 00:16:42.467 "method": "accel_set_options", 00:16:42.467 "params": { 00:16:42.467 "small_cache_size": 128, 00:16:42.467 "large_cache_size": 16, 00:16:42.467 "task_count": 2048, 00:16:42.467 "sequence_count": 2048, 00:16:42.467 "buf_count": 2048 00:16:42.467 } 00:16:42.467 } 00:16:42.467 ] 00:16:42.467 }, 00:16:42.467 { 00:16:42.467 "subsystem": "bdev", 00:16:42.467 "config": [ 00:16:42.467 { 00:16:42.467 "method": "bdev_set_options", 00:16:42.467 "params": { 00:16:42.467 "bdev_io_pool_size": 65535, 00:16:42.467 "bdev_io_cache_size": 256, 00:16:42.467 "bdev_auto_examine": true, 00:16:42.467 "iobuf_small_cache_size": 128, 00:16:42.467 "iobuf_large_cache_size": 16 00:16:42.467 } 00:16:42.467 }, 00:16:42.467 { 00:16:42.467 "method": "bdev_raid_set_options", 00:16:42.467 "params": { 00:16:42.467 "process_window_size_kb": 1024, 00:16:42.467 "process_max_bandwidth_mb_sec": 0 00:16:42.467 } 00:16:42.467 }, 00:16:42.467 { 00:16:42.467 "method": "bdev_iscsi_set_options", 00:16:42.467 "params": { 00:16:42.467 "timeout_sec": 30 00:16:42.467 } 00:16:42.467 }, 00:16:42.467 { 00:16:42.467 "method": "bdev_nvme_set_options", 00:16:42.467 "params": { 00:16:42.467 "action_on_timeout": "none", 
00:16:42.467 "timeout_us": 0, 00:16:42.467 "timeout_admin_us": 0, 00:16:42.467 "keep_alive_timeout_ms": 10000, 00:16:42.467 "arbitration_burst": 0, 00:16:42.467 "low_priority_weight": 0, 00:16:42.467 "medium_priority_weight": 0, 00:16:42.467 "high_priority_weight": 0, 00:16:42.467 "nvme_adminq_poll_period_us": 10000, 00:16:42.467 "nvme_ioq_poll_period_us": 0, 00:16:42.467 "io_queue_requests": 0, 00:16:42.467 "delay_cmd_submit": true, 00:16:42.467 "transport_retry_count": 4, 00:16:42.467 "bdev_retry_count": 3, 00:16:42.467 "transport_ack_timeout": 0, 00:16:42.467 "ctrlr_loss_timeout_sec": 0, 00:16:42.467 "reconnect_delay_sec": 0, 00:16:42.467 "fast_io_fail_timeout_sec": 0, 00:16:42.467 "disable_auto_failback": false, 00:16:42.467 "generate_uuids": false, 00:16:42.467 "transport_tos": 0, 00:16:42.467 "nvme_error_stat": false, 00:16:42.467 "rdma_srq_size": 0, 00:16:42.467 "io_path_stat": false, 00:16:42.467 "allow_accel_sequence": false, 00:16:42.467 "rdma_max_cq_size": 0, 00:16:42.467 "rdma_cm_event_timeout_ms": 0, 00:16:42.467 "dhchap_digests": [ 00:16:42.467 "sha256", 00:16:42.467 "sha384", 00:16:42.467 "sha512" 00:16:42.467 ], 00:16:42.467 "dhchap_dhgroups": [ 00:16:42.467 "null", 00:16:42.467 "ffdhe2048", 00:16:42.467 "ffdhe3072", 00:16:42.467 "ffdhe4096", 00:16:42.467 "ffdhe6144", 00:16:42.467 "ffdhe8192" 00:16:42.467 ] 00:16:42.467 } 00:16:42.467 }, 00:16:42.467 { 00:16:42.467 "method": "bdev_nvme_set_hotplug", 00:16:42.467 "params": { 00:16:42.467 "period_us": 100000, 00:16:42.467 "enable": false 00:16:42.467 } 00:16:42.467 }, 00:16:42.467 { 00:16:42.467 "method": "bdev_malloc_create", 00:16:42.467 "params": { 00:16:42.467 "name": "malloc0", 00:16:42.467 "num_blocks": 8192, 00:16:42.467 "block_size": 4096, 00:16:42.467 "physical_block_size": 4096, 00:16:42.467 "uuid": "db7b3f51-5a66-4ff8-9920-8fcfd03ab9dd", 00:16:42.467 "optimal_io_boundary": 0, 00:16:42.467 "md_size": 0, 00:16:42.467 "dif_type": 0, 00:16:42.467 "dif_is_head_of_md": false, 00:16:42.467 "dif_pi_format": 0 00:16:42.467 } 00:16:42.467 }, 00:16:42.467 { 00:16:42.467 "method": "bdev_wait_for_examine" 00:16:42.467 } 00:16:42.467 ] 00:16:42.467 }, 00:16:42.467 { 00:16:42.467 "subsystem": "scsi", 00:16:42.467 "config": null 00:16:42.467 }, 00:16:42.467 { 00:16:42.467 "subsystem": "scheduler", 00:16:42.467 "config": [ 00:16:42.467 { 00:16:42.467 "method": "framework_set_scheduler", 00:16:42.467 "params": { 00:16:42.467 "name": "static" 00:16:42.467 } 00:16:42.467 } 00:16:42.467 ] 00:16:42.467 }, 00:16:42.467 { 00:16:42.467 "subsystem": "vhost_scsi", 00:16:42.467 "config": [] 00:16:42.467 }, 00:16:42.467 { 00:16:42.467 "subsystem": "vhost_blk", 00:16:42.467 "config": [] 00:16:42.467 }, 00:16:42.467 { 00:16:42.467 "subsystem": "ublk", 00:16:42.467 "config": [ 00:16:42.467 { 00:16:42.467 "method": "ublk_create_target", 00:16:42.467 "params": { 00:16:42.467 "cpumask": "1" 00:16:42.467 } 00:16:42.467 }, 00:16:42.467 { 00:16:42.467 "method": "ublk_start_disk", 00:16:42.467 "params": { 00:16:42.467 "bdev_name": "malloc0", 00:16:42.467 "ublk_id": 0, 00:16:42.467 "num_queues": 1, 00:16:42.467 "queue_depth": 128 00:16:42.467 } 00:16:42.467 } 00:16:42.467 ] 00:16:42.467 }, 00:16:42.467 { 00:16:42.467 "subsystem": "nbd", 00:16:42.467 "config": [] 00:16:42.467 }, 00:16:42.467 { 00:16:42.467 "subsystem": "nvmf", 00:16:42.467 "config": [ 00:16:42.467 { 00:16:42.467 "method": "nvmf_set_config", 00:16:42.467 "params": { 00:16:42.467 "discovery_filter": "match_any", 00:16:42.467 "admin_cmd_passthru": { 00:16:42.467 "identify_ctrlr": false 
00:16:42.467 }, 00:16:42.467 "dhchap_digests": [ 00:16:42.467 "sha256", 00:16:42.467 "sha384", 00:16:42.467 "sha512" 00:16:42.467 ], 00:16:42.467 "dhchap_dhgroups": [ 00:16:42.467 "null", 00:16:42.467 "ffdhe2048", 00:16:42.467 "ffdhe3072", 00:16:42.467 "ffdhe4096", 00:16:42.467 "ffdhe6144", 00:16:42.467 "ffdhe8192" 00:16:42.467 ] 00:16:42.467 } 00:16:42.467 }, 00:16:42.467 { 00:16:42.467 "method": "nvmf_set_max_subsystems", 00:16:42.467 "params": { 00:16:42.467 "max_subsystems": 1024 00:16:42.467 } 00:16:42.467 }, 00:16:42.467 { 00:16:42.467 "method": "nvmf_set_crdt", 00:16:42.467 "params": { 00:16:42.467 "crdt1": 0, 00:16:42.467 "crdt2": 0, 00:16:42.467 "crdt3": 0 00:16:42.467 } 00:16:42.467 } 00:16:42.468 ] 00:16:42.468 }, 00:16:42.468 { 00:16:42.468 "subsystem": "iscsi", 00:16:42.468 "config": [ 00:16:42.468 { 00:16:42.468 "method": "iscsi_set_options", 00:16:42.468 "params": { 00:16:42.468 "node_base": "iqn.2016-06.io.spdk", 00:16:42.468 "max_sessions": 128, 00:16:42.468 "max_connections_per_session": 2, 00:16:42.468 "max_queue_depth": 64, 00:16:42.468 "default_time2wait": 2, 00:16:42.468 "default_time2retain": 20, 00:16:42.468 "first_burst_length": 8192, 00:16:42.468 "immediate_data": true, 00:16:42.468 "allow_duplicated_isid": false, 00:16:42.468 "error_recovery_level": 0, 00:16:42.468 "nop_timeout": 60, 00:16:42.468 "nop_in_interval": 30, 00:16:42.468 "disable_chap": false, 00:16:42.468 "require_chap": false, 00:16:42.468 "mutual_chap": false, 00:16:42.468 "chap_group": 0, 00:16:42.468 "max_large_datain_per_connection": 64, 00:16:42.468 "max_r2t_per_connection": 4, 00:16:42.468 "pdu_pool_size": 36864, 00:16:42.468 "immediate_data_pool_size": 16384, 00:16:42.468 "data_out_pool_size": 2048 00:16:42.468 } 00:16:42.468 } 00:16:42.468 ] 00:16:42.468 } 00:16:42.468 ] 00:16:42.468 }' 00:16:42.468 12:49:07 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73667 00:16:42.468 12:49:07 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73667 ']' 00:16:42.468 12:49:07 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73667 00:16:42.468 12:49:07 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:42.468 12:49:07 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:42.468 12:49:07 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73667 00:16:42.762 12:49:07 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:42.762 killing process with pid 73667 00:16:42.762 12:49:07 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:42.762 12:49:07 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73667' 00:16:42.762 12:49:07 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73667 00:16:42.762 12:49:07 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73667 00:16:43.699 [2024-11-20 12:49:09.059353] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:43.699 [2024-11-20 12:49:09.097799] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:43.699 [2024-11-20 12:49:09.097948] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:43.699 [2024-11-20 12:49:09.106801] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:43.699 [2024-11-20 
12:49:09.106863] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:43.699 [2024-11-20 12:49:09.106878] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:43.699 [2024-11-20 12:49:09.106907] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:43.699 [2024-11-20 12:49:09.107061] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:45.083 12:49:10 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73717 00:16:45.083 12:49:10 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73717 00:16:45.083 12:49:10 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73717 ']' 00:16:45.083 12:49:10 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.083 12:49:10 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.083 12:49:10 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.083 12:49:10 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.083 12:49:10 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:45.083 12:49:10 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:45.083 12:49:10 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:45.083 "subsystems": [ 00:16:45.083 { 00:16:45.083 "subsystem": "fsdev", 00:16:45.083 "config": [ 00:16:45.083 { 00:16:45.083 "method": "fsdev_set_opts", 00:16:45.083 "params": { 00:16:45.083 "fsdev_io_pool_size": 65535, 00:16:45.083 "fsdev_io_cache_size": 256 00:16:45.083 } 00:16:45.083 } 00:16:45.083 ] 00:16:45.083 }, 00:16:45.083 { 00:16:45.083 "subsystem": "keyring", 00:16:45.083 "config": [] 00:16:45.083 }, 00:16:45.083 { 00:16:45.083 "subsystem": "iobuf", 00:16:45.083 "config": [ 00:16:45.083 { 00:16:45.083 "method": "iobuf_set_options", 00:16:45.083 "params": { 00:16:45.083 "small_pool_count": 8192, 00:16:45.083 "large_pool_count": 1024, 00:16:45.083 "small_bufsize": 8192, 00:16:45.083 "large_bufsize": 135168, 00:16:45.083 "enable_numa": false 00:16:45.083 } 00:16:45.083 } 00:16:45.083 ] 00:16:45.083 }, 00:16:45.083 { 00:16:45.083 "subsystem": "sock", 00:16:45.083 "config": [ 00:16:45.084 { 00:16:45.084 "method": "sock_set_default_impl", 00:16:45.084 "params": { 00:16:45.084 "impl_name": "posix" 00:16:45.084 } 00:16:45.084 }, 00:16:45.084 { 00:16:45.084 "method": "sock_impl_set_options", 00:16:45.084 "params": { 00:16:45.084 "impl_name": "ssl", 00:16:45.084 "recv_buf_size": 4096, 00:16:45.084 "send_buf_size": 4096, 00:16:45.084 "enable_recv_pipe": true, 00:16:45.084 "enable_quickack": false, 00:16:45.084 "enable_placement_id": 0, 00:16:45.084 "enable_zerocopy_send_server": true, 00:16:45.084 "enable_zerocopy_send_client": false, 00:16:45.084 "zerocopy_threshold": 0, 00:16:45.084 "tls_version": 0, 00:16:45.084 "enable_ktls": false 00:16:45.084 } 00:16:45.084 }, 00:16:45.084 { 00:16:45.084 "method": "sock_impl_set_options", 00:16:45.084 "params": { 00:16:45.084 "impl_name": "posix", 00:16:45.084 "recv_buf_size": 2097152, 00:16:45.084 "send_buf_size": 2097152, 00:16:45.084 "enable_recv_pipe": true, 00:16:45.084 "enable_quickack": false, 00:16:45.084 "enable_placement_id": 0, 00:16:45.084 "enable_zerocopy_send_server": true, 
00:16:45.084 "enable_zerocopy_send_client": false, 00:16:45.084 "zerocopy_threshold": 0, 00:16:45.084 "tls_version": 0, 00:16:45.084 "enable_ktls": false 00:16:45.084 } 00:16:45.084 } 00:16:45.084 ] 00:16:45.084 }, 00:16:45.084 { 00:16:45.084 "subsystem": "vmd", 00:16:45.084 "config": [] 00:16:45.084 }, 00:16:45.084 { 00:16:45.084 "subsystem": "accel", 00:16:45.084 "config": [ 00:16:45.084 { 00:16:45.084 "method": "accel_set_options", 00:16:45.084 "params": { 00:16:45.084 "small_cache_size": 128, 00:16:45.084 "large_cache_size": 16, 00:16:45.084 "task_count": 2048, 00:16:45.084 "sequence_count": 2048, 00:16:45.084 "buf_count": 2048 00:16:45.084 } 00:16:45.084 } 00:16:45.084 ] 00:16:45.084 }, 00:16:45.084 { 00:16:45.084 "subsystem": "bdev", 00:16:45.084 "config": [ 00:16:45.084 { 00:16:45.084 "method": "bdev_set_options", 00:16:45.084 "params": { 00:16:45.084 "bdev_io_pool_size": 65535, 00:16:45.084 "bdev_io_cache_size": 256, 00:16:45.084 "bdev_auto_examine": true, 00:16:45.084 "iobuf_small_cache_size": 128, 00:16:45.084 "iobuf_large_cache_size": 16 00:16:45.084 } 00:16:45.084 }, 00:16:45.084 { 00:16:45.084 "method": "bdev_raid_set_options", 00:16:45.084 "params": { 00:16:45.084 "process_window_size_kb": 1024, 00:16:45.084 "process_max_bandwidth_mb_sec": 0 00:16:45.084 } 00:16:45.084 }, 00:16:45.084 { 00:16:45.084 "method": "bdev_iscsi_set_options", 00:16:45.084 "params": { 00:16:45.084 "timeout_sec": 30 00:16:45.084 } 00:16:45.084 }, 00:16:45.084 { 00:16:45.084 "method": "bdev_nvme_set_options", 00:16:45.084 "params": { 00:16:45.084 "action_on_timeout": "none", 00:16:45.084 "timeout_us": 0, 00:16:45.084 "timeout_admin_us": 0, 00:16:45.084 "keep_alive_timeout_ms": 10000, 00:16:45.084 "arbitration_burst": 0, 00:16:45.084 "low_priority_weight": 0, 00:16:45.084 "medium_priority_weight": 0, 00:16:45.084 "high_priority_weight": 0, 00:16:45.084 "nvme_adminq_poll_period_us": 10000, 00:16:45.084 "nvme_ioq_poll_period_us": 0, 00:16:45.084 "io_queue_requests": 0, 00:16:45.084 "delay_cmd_submit": true, 00:16:45.084 "transport_retry_count": 4, 00:16:45.084 "bdev_retry_count": 3, 00:16:45.084 "transport_ack_timeout": 0, 00:16:45.084 "ctrlr_loss_timeout_sec": 0, 00:16:45.084 "reconnect_delay_sec": 0, 00:16:45.084 "fast_io_fail_timeout_sec": 0, 00:16:45.084 "disable_auto_failback": false, 00:16:45.084 "generate_uuids": false, 00:16:45.084 "transport_tos": 0, 00:16:45.084 "nvme_error_stat": false, 00:16:45.084 "rdma_srq_size": 0, 00:16:45.084 "io_path_stat": false, 00:16:45.084 "allow_accel_sequence": false, 00:16:45.084 "rdma_max_cq_size": 0, 00:16:45.084 "rdma_cm_event_timeout_ms": 0, 00:16:45.084 "dhchap_digests": [ 00:16:45.084 "sha256", 00:16:45.084 "sha384", 00:16:45.084 "sha512" 00:16:45.084 ], 00:16:45.084 "dhchap_dhgroups": [ 00:16:45.084 "null", 00:16:45.084 "ffdhe2048", 00:16:45.084 "ffdhe3072", 00:16:45.084 "ffdhe4096", 00:16:45.084 "ffdhe6144", 00:16:45.084 "ffdhe8192" 00:16:45.084 ] 00:16:45.084 } 00:16:45.084 }, 00:16:45.084 { 00:16:45.084 "method": "bdev_nvme_set_hotplug", 00:16:45.084 "params": { 00:16:45.084 "period_us": 100000, 00:16:45.084 "enable": false 00:16:45.084 } 00:16:45.084 }, 00:16:45.084 { 00:16:45.084 "method": "bdev_malloc_create", 00:16:45.084 "params": { 00:16:45.084 "name": "malloc0", 00:16:45.084 "num_blocks": 8192, 00:16:45.084 "block_size": 4096, 00:16:45.084 "physical_block_size": 4096, 00:16:45.084 "uuid": "db7b3f51-5a66-4ff8-9920-8fcfd03ab9dd", 00:16:45.084 "optimal_io_boundary": 0, 00:16:45.084 "md_size": 0, 00:16:45.084 "dif_type": 0, 00:16:45.084 
"dif_is_head_of_md": false, 00:16:45.084 "dif_pi_format": 0 00:16:45.084 } 00:16:45.084 }, 00:16:45.084 { 00:16:45.084 "method": "bdev_wait_for_examine" 00:16:45.084 } 00:16:45.084 ] 00:16:45.084 }, 00:16:45.084 { 00:16:45.084 "subsystem": "scsi", 00:16:45.084 "config": null 00:16:45.084 }, 00:16:45.084 { 00:16:45.084 "subsystem": "scheduler", 00:16:45.084 "config": [ 00:16:45.084 { 00:16:45.084 "method": "framework_set_scheduler", 00:16:45.084 "params": { 00:16:45.084 "name": "static" 00:16:45.084 } 00:16:45.084 } 00:16:45.084 ] 00:16:45.084 }, 00:16:45.084 { 00:16:45.084 "subsystem": "vhost_scsi", 00:16:45.084 "config": [] 00:16:45.084 }, 00:16:45.084 { 00:16:45.084 "subsystem": "vhost_blk", 00:16:45.084 "config": [] 00:16:45.084 }, 00:16:45.084 { 00:16:45.084 "subsystem": "ublk", 00:16:45.084 "config": [ 00:16:45.084 { 00:16:45.084 "method": "ublk_create_target", 00:16:45.084 "params": { 00:16:45.084 "cpumask": "1" 00:16:45.084 } 00:16:45.084 }, 00:16:45.084 { 00:16:45.084 "method": "ublk_start_disk", 00:16:45.084 "params": { 00:16:45.084 "bdev_name": "malloc0", 00:16:45.084 "ublk_id": 0, 00:16:45.084 "num_queues": 1, 00:16:45.084 "queue_depth": 128 00:16:45.084 } 00:16:45.084 } 00:16:45.084 ] 00:16:45.084 }, 00:16:45.084 { 00:16:45.084 "subsystem": "nbd", 00:16:45.084 "config": [] 00:16:45.084 }, 00:16:45.084 { 00:16:45.084 "subsystem": "nvmf", 00:16:45.084 "config": [ 00:16:45.084 { 00:16:45.084 "method": "nvmf_set_config", 00:16:45.084 "params": { 00:16:45.084 "discovery_filter": "match_any", 00:16:45.084 "admin_cmd_passthru": { 00:16:45.084 "identify_ctrlr": false 00:16:45.084 }, 00:16:45.084 "dhchap_digests": [ 00:16:45.084 "sha256", 00:16:45.084 "sha384", 00:16:45.084 "sha512" 00:16:45.084 ], 00:16:45.084 "dhchap_dhgroups": [ 00:16:45.084 "null", 00:16:45.084 "ffdhe2048", 00:16:45.084 "ffdhe3072", 00:16:45.084 "ffdhe4096", 00:16:45.084 "ffdhe6144", 00:16:45.084 "ffdhe8192" 00:16:45.084 ] 00:16:45.084 } 00:16:45.084 }, 00:16:45.084 { 00:16:45.084 "method": "nvmf_set_max_subsystems", 00:16:45.084 "params": { 00:16:45.084 "max_subsystems": 1024 00:16:45.084 } 00:16:45.084 }, 00:16:45.084 { 00:16:45.084 "method": "nvmf_set_crdt", 00:16:45.084 "params": { 00:16:45.084 "crdt1": 0, 00:16:45.084 "crdt2": 0, 00:16:45.084 "crdt3": 0 00:16:45.084 } 00:16:45.084 } 00:16:45.084 ] 00:16:45.084 }, 00:16:45.084 { 00:16:45.084 "subsystem": "iscsi", 00:16:45.084 "config": [ 00:16:45.084 { 00:16:45.084 "method": "iscsi_set_options", 00:16:45.084 "params": { 00:16:45.084 "node_base": "iqn.2016-06.io.spdk", 00:16:45.084 "max_sessions": 128, 00:16:45.084 "max_connections_per_session": 2, 00:16:45.084 "max_queue_depth": 64, 00:16:45.084 "default_time2wait": 2, 00:16:45.084 "default_time2retain": 20, 00:16:45.084 "first_burst_length": 8192, 00:16:45.084 "immediate_data": true, 00:16:45.084 "allow_duplicated_isid": false, 00:16:45.084 "error_recovery_level": 0, 00:16:45.084 "nop_timeout": 60, 00:16:45.084 "nop_in_interval": 30, 00:16:45.084 "disable_chap": false, 00:16:45.084 "require_chap": false, 00:16:45.084 "mutual_chap": false, 00:16:45.084 "chap_group": 0, 00:16:45.084 "max_large_datain_per_connection": 64, 00:16:45.084 "max_r2t_per_connection": 4, 00:16:45.084 "pdu_pool_size": 36864, 00:16:45.084 "immediate_data_pool_size": 16384, 00:16:45.084 "data_out_pool_size": 2048 00:16:45.084 } 00:16:45.084 } 00:16:45.084 ] 00:16:45.084 } 00:16:45.084 ] 00:16:45.084 }' 00:16:45.084 [2024-11-20 12:49:10.402876] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
00:16:45.084 [2024-11-20 12:49:10.403299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73717 ] 00:16:45.084 [2024-11-20 12:49:10.557298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.346 [2024-11-20 12:49:10.633548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.916 [2024-11-20 12:49:11.277754] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:45.916 [2024-11-20 12:49:11.278400] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:45.916 [2024-11-20 12:49:11.285851] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:45.916 [2024-11-20 12:49:11.285911] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:45.916 [2024-11-20 12:49:11.285918] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:45.916 [2024-11-20 12:49:11.285924] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:45.916 [2024-11-20 12:49:11.294804] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:45.916 [2024-11-20 12:49:11.294823] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:45.916 [2024-11-20 12:49:11.301760] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:45.916 [2024-11-20 12:49:11.301832] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:45.916 [2024-11-20 12:49:11.318762] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:45.916 12:49:11 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:45.916 12:49:11 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:45.916 12:49:11 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:45.916 12:49:11 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:45.916 12:49:11 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.916 12:49:11 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:45.917 12:49:11 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.917 12:49:11 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:45.917 12:49:11 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:45.917 12:49:11 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73717 00:16:45.917 12:49:11 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73717 ']' 00:16:45.917 12:49:11 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73717 00:16:45.917 12:49:11 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:45.917 12:49:11 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:45.917 12:49:11 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73717 00:16:45.917 12:49:11 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:45.917 killing process with pid 73717 00:16:45.917 
12:49:11 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:45.917 12:49:11 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73717' 00:16:45.917 12:49:11 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73717 00:16:45.917 12:49:11 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73717 00:16:47.304 [2024-11-20 12:49:12.482276] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:47.304 [2024-11-20 12:49:12.515764] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:47.304 [2024-11-20 12:49:12.515868] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:47.304 [2024-11-20 12:49:12.526757] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:47.304 [2024-11-20 12:49:12.526806] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:47.305 [2024-11-20 12:49:12.526812] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:47.305 [2024-11-20 12:49:12.526833] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:47.305 [2024-11-20 12:49:12.526938] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:48.249 12:49:13 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:48.249 00:16:48.249 real 0m7.024s 00:16:48.249 user 0m4.823s 00:16:48.249 sys 0m2.797s 00:16:48.249 12:49:13 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:48.249 12:49:13 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:48.249 ************************************ 00:16:48.249 END TEST test_save_ublk_config 00:16:48.249 ************************************ 00:16:48.249 12:49:13 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73790 00:16:48.249 12:49:13 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:48.249 12:49:13 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73790 00:16:48.250 12:49:13 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:48.250 12:49:13 ublk -- common/autotest_common.sh@835 -- # '[' -z 73790 ']' 00:16:48.250 12:49:13 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.250 12:49:13 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:48.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.250 12:49:13 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.250 12:49:13 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:48.250 12:49:13 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.511 [2024-11-20 12:49:13.800653] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
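The waitforlisten step above gates the test until the freshly started target answers on its RPC socket. A simplified reconstruction of that helper (the real one lives in autotest_common.sh; the retry budget and poll interval here are assumptions):

    # Poll the RPC socket until the target responds, or fail if it exits first.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died before listening
            "$SPDK_DIR/scripts/rpc.py" -s "$rpc_addr" -t 1 rpc_get_methods \
                &>/dev/null && return 0               # socket is up and answering
            sleep 0.5
        done
        return 1                                      # gave up waiting
    }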
00:16:48.512 [2024-11-20 12:49:13.800772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73790 ] 00:16:48.512 [2024-11-20 12:49:13.956992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:48.773 [2024-11-20 12:49:14.034706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.773 [2024-11-20 12:49:14.034777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.346 12:49:14 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:49.346 12:49:14 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:49.346 12:49:14 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:49.346 12:49:14 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:49.346 12:49:14 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:49.346 12:49:14 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:49.346 ************************************ 00:16:49.346 START TEST test_create_ublk 00:16:49.346 ************************************ 00:16:49.346 12:49:14 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:49.346 12:49:14 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:49.346 12:49:14 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.346 12:49:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:49.346 [2024-11-20 12:49:14.642755] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:49.346 [2024-11-20 12:49:14.644259] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:49.346 12:49:14 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.346 12:49:14 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:49.346 12:49:14 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:49.346 12:49:14 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.346 12:49:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:49.346 12:49:14 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.346 12:49:14 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:49.346 12:49:14 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:49.346 12:49:14 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.346 12:49:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:49.346 [2024-11-20 12:49:14.799860] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:49.346 [2024-11-20 12:49:14.800149] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:49.346 [2024-11-20 12:49:14.800160] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:49.346 [2024-11-20 12:49:14.800165] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:49.346 [2024-11-20 12:49:14.807773] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:49.346 [2024-11-20 12:49:14.807789] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:49.346 
[2024-11-20 12:49:14.815762] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:49.346 [2024-11-20 12:49:14.828805] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:49.346 [2024-11-20 12:49:14.858768] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:49.607 12:49:14 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.608 12:49:14 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:49.608 12:49:14 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:49.608 12:49:14 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:49.608 12:49:14 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.608 12:49:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:49.608 12:49:14 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.608 12:49:14 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:49.608 { 00:16:49.608 "ublk_device": "/dev/ublkb0", 00:16:49.608 "id": 0, 00:16:49.608 "queue_depth": 512, 00:16:49.608 "num_queues": 4, 00:16:49.608 "bdev_name": "Malloc0" 00:16:49.608 } 00:16:49.608 ]' 00:16:49.608 12:49:14 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:49.608 12:49:14 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:49.608 12:49:14 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:49.608 12:49:14 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:49.608 12:49:14 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:49.608 12:49:14 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:49.608 12:49:14 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:49.608 12:49:15 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:49.608 12:49:15 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:49.608 12:49:15 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:49.608 12:49:15 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:49.608 12:49:15 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:49.608 12:49:15 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:49.608 12:49:15 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:49.608 12:49:15 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:49.608 12:49:15 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:49.608 12:49:15 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:49.608 12:49:15 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:49.608 12:49:15 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:49.608 12:49:15 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:49.608 12:49:15 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
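fio's note below that the verification read phase will never start is expected: with --time_based and --runtime=10 the whole run is spent writing, so the 0xcc pattern is laid down but never read back within this job. The command assembled by the trace above is, verbatim:

    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0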
00:16:49.608 12:49:15 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:49.869 fio: verification read phase will never start because write phase uses all of runtime 00:16:49.869 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:49.869 fio-3.35 00:16:49.869 Starting 1 process 00:16:59.873 00:16:59.873 fio_test: (groupid=0, jobs=1): err= 0: pid=73829: Wed Nov 20 12:49:25 2024 00:16:59.873 write: IOPS=18.0k, BW=70.3MiB/s (73.7MB/s)(703MiB/10002msec); 0 zone resets 00:16:59.873 clat (usec): min=36, max=7847, avg=54.75, stdev=112.60 00:16:59.873 lat (usec): min=36, max=7857, avg=55.20, stdev=112.62 00:16:59.873 clat percentiles (usec): 00:16:59.873 | 1.00th=[ 41], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 46], 00:16:59.873 | 30.00th=[ 48], 40.00th=[ 49], 50.00th=[ 50], 60.00th=[ 51], 00:16:59.873 | 70.00th=[ 52], 80.00th=[ 54], 90.00th=[ 58], 95.00th=[ 63], 00:16:59.873 | 99.00th=[ 73], 99.50th=[ 80], 99.90th=[ 2442], 99.95th=[ 3359], 00:16:59.873 | 99.99th=[ 3654] 00:16:59.873 bw ( KiB/s): min=33080, max=79640, per=100.00%, avg=72105.84, stdev=10020.68, samples=19 00:16:59.873 iops : min= 8270, max=19910, avg=18026.42, stdev=2505.15, samples=19 00:16:59.873 lat (usec) : 50=54.57%, 100=45.09%, 250=0.14%, 500=0.03%, 750=0.01% 00:16:59.873 lat (usec) : 1000=0.01% 00:16:59.873 lat (msec) : 2=0.04%, 4=0.12%, 10=0.01% 00:16:59.873 cpu : usr=3.44%, sys=14.62%, ctx=180041, majf=0, minf=796 00:16:59.873 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:59.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.873 issued rwts: total=0,180038,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.873 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:59.873 00:16:59.873 Run status group 0 (all jobs): 00:16:59.873 WRITE: bw=70.3MiB/s (73.7MB/s), 70.3MiB/s-70.3MiB/s (73.7MB/s-73.7MB/s), io=703MiB (737MB), run=10002-10002msec 00:16:59.873 00:16:59.873 Disk stats (read/write): 00:16:59.873 ublkb0: ios=0/178315, merge=0/0, ticks=0/8211, in_queue=8212, util=99.10% 00:16:59.873 12:49:25 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:59.873 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.873 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.873 [2024-11-20 12:49:25.265192] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:59.873 [2024-11-20 12:49:25.306792] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:59.873 [2024-11-20 12:49:25.307394] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:59.873 [2024-11-20 12:49:25.315796] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:59.873 [2024-11-20 12:49:25.316023] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:59.873 [2024-11-20 12:49:25.316035] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:59.873 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.873 12:49:25 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:16:59.873 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:59.873 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:59.873 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:59.873 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:59.873 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:59.873 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:59.873 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:59.873 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.873 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.873 [2024-11-20 12:49:25.332823] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:59.873 request: 00:16:59.873 { 00:16:59.873 "ublk_id": 0, 00:16:59.873 "method": "ublk_stop_disk", 00:16:59.873 "req_id": 1 00:16:59.873 } 00:16:59.873 Got JSON-RPC error response 00:16:59.873 response: 00:16:59.873 { 00:16:59.873 "code": -19, 00:16:59.873 "message": "No such device" 00:16:59.873 } 00:16:59.873 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:59.873 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:59.873 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:59.873 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:59.873 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:59.873 12:49:25 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:59.873 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.873 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.873 [2024-11-20 12:49:25.346816] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:59.873 [2024-11-20 12:49:25.350363] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:59.873 [2024-11-20 12:49:25.350394] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:59.873 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.873 12:49:25 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:59.873 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.873 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.447 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.447 12:49:25 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:17:00.447 12:49:25 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:00.447 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.447 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.447 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.447 12:49:25 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:00.447 12:49:25 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:17:00.447 12:49:25 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:00.447 12:49:25 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:00.447 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.447 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.447 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.447 12:49:25 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:00.447 12:49:25 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:17:00.447 12:49:25 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:00.447 00:17:00.447 real 0m11.169s 00:17:00.447 user 0m0.639s 00:17:00.447 sys 0m1.537s 00:17:00.447 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.447 12:49:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.447 ************************************ 00:17:00.447 END TEST test_create_ublk 00:17:00.447 ************************************ 00:17:00.447 12:49:25 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:17:00.447 12:49:25 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:00.447 12:49:25 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.447 12:49:25 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.447 ************************************ 00:17:00.447 START TEST test_create_multi_ublk 00:17:00.447 ************************************ 00:17:00.447 12:49:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:17:00.447 12:49:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:17:00.447 12:49:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.447 12:49:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.447 [2024-11-20 12:49:25.852752] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:00.447 [2024-11-20 12:49:25.854273] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:00.447 12:49:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.447 12:49:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:17:00.447 12:49:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:17:00.447 12:49:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:00.447 12:49:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:17:00.447 12:49:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.447 12:49:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.709 12:49:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.709 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:17:00.709 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:00.709 12:49:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.709 12:49:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.709 [2024-11-20 12:49:26.056871] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:17:00.709 [2024-11-20 12:49:26.057165] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:00.709 [2024-11-20 12:49:26.057177] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:00.709 [2024-11-20 12:49:26.057185] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:00.709 [2024-11-20 12:49:26.080761] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:00.709 [2024-11-20 12:49:26.080780] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:00.709 [2024-11-20 12:49:26.092758] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:00.709 [2024-11-20 12:49:26.093251] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:00.709 [2024-11-20 12:49:26.132757] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:00.709 12:49:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.709 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:17:00.709 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:00.709 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:17:00.709 12:49:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.709 12:49:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.971 12:49:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.971 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:17:00.971 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:17:00.971 12:49:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.971 12:49:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.971 [2024-11-20 12:49:26.352856] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:17:00.971 [2024-11-20 12:49:26.353149] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:17:00.971 [2024-11-20 12:49:26.353162] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:00.971 [2024-11-20 12:49:26.353167] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:00.971 [2024-11-20 12:49:26.360779] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:00.971 [2024-11-20 12:49:26.360795] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:00.971 [2024-11-20 12:49:26.368764] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:00.971 [2024-11-20 12:49:26.369250] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:00.971 [2024-11-20 12:49:26.385772] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:00.971 12:49:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.971 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:17:00.971 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:00.971 
12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:17:00.971 12:49:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.971 12:49:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:01.233 12:49:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.233 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:17:01.233 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:17:01.233 12:49:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.233 12:49:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:01.233 [2024-11-20 12:49:26.544840] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:17:01.233 [2024-11-20 12:49:26.545137] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:17:01.233 [2024-11-20 12:49:26.545149] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:17:01.233 [2024-11-20 12:49:26.545156] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:17:01.233 [2024-11-20 12:49:26.552767] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:01.233 [2024-11-20 12:49:26.552785] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:01.233 [2024-11-20 12:49:26.560756] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:01.233 [2024-11-20 12:49:26.561273] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:17:01.233 [2024-11-20 12:49:26.569775] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:17:01.233 12:49:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.233 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:17:01.233 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:01.233 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:17:01.233 12:49:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.233 12:49:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:01.233 12:49:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.233 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:17:01.233 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:17:01.233 12:49:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.233 12:49:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:01.233 [2024-11-20 12:49:26.728855] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:17:01.233 [2024-11-20 12:49:26.729145] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:17:01.233 [2024-11-20 12:49:26.729159] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:17:01.233 [2024-11-20 12:49:26.729164] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:17:01.233 
[2024-11-20 12:49:26.736786] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:01.233 [2024-11-20 12:49:26.736802] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:01.233 [2024-11-20 12:49:26.744764] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:01.233 [2024-11-20 12:49:26.745251] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:17:01.495 [2024-11-20 12:49:26.751801] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:17:01.495 12:49:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.495 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:17:01.495 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:17:01.495 12:49:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.495 12:49:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:01.495 12:49:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.495 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:17:01.495 { 00:17:01.495 "ublk_device": "/dev/ublkb0", 00:17:01.495 "id": 0, 00:17:01.495 "queue_depth": 512, 00:17:01.495 "num_queues": 4, 00:17:01.495 "bdev_name": "Malloc0" 00:17:01.495 }, 00:17:01.495 { 00:17:01.495 "ublk_device": "/dev/ublkb1", 00:17:01.495 "id": 1, 00:17:01.496 "queue_depth": 512, 00:17:01.496 "num_queues": 4, 00:17:01.496 "bdev_name": "Malloc1" 00:17:01.496 }, 00:17:01.496 { 00:17:01.496 "ublk_device": "/dev/ublkb2", 00:17:01.496 "id": 2, 00:17:01.496 "queue_depth": 512, 00:17:01.496 "num_queues": 4, 00:17:01.496 "bdev_name": "Malloc2" 00:17:01.496 }, 00:17:01.496 { 00:17:01.496 "ublk_device": "/dev/ublkb3", 00:17:01.496 "id": 3, 00:17:01.496 "queue_depth": 512, 00:17:01.496 "num_queues": 4, 00:17:01.496 "bdev_name": "Malloc3" 00:17:01.496 } 00:17:01.496 ]' 00:17:01.496 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:17:01.496 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:01.496 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:17:01.496 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:01.496 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:17:01.496 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:17:01.496 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:17:01.496 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:01.496 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:17:01.496 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:01.496 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:17:01.496 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:01.496 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:01.496 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:17:01.496 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:17:01.496 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:17:01.496 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:17:01.496 12:49:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:17:01.757 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:01.757 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:17:01.757 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:01.757 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:17:01.757 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:17:01.757 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:01.757 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:17:01.757 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:17:01.757 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:17:01.757 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:17:01.757 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:17:01.757 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:01.757 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:17:01.757 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:01.757 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:17:01.757 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:17:01.757 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:01.757 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:17:01.757 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:17:01.757 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:17:02.018 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:17:02.018 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:17:02.018 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:02.018 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:17:02.018 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:02.018 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:17:02.018 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:17:02.018 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:17:02.018 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:17:02.018 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:02.018 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:17:02.018 12:49:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.018 12:49:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:02.018 [2024-11-20 12:49:27.416826] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:02.018 [2024-11-20 12:49:27.460202] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:02.019 [2024-11-20 12:49:27.461254] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:02.019 [2024-11-20 12:49:27.467766] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:02.019 [2024-11-20 12:49:27.467984] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:02.019 [2024-11-20 12:49:27.467997] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:02.019 12:49:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.019 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:02.019 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:17:02.019 12:49:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.019 12:49:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:02.019 [2024-11-20 12:49:27.481820] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:02.019 [2024-11-20 12:49:27.518790] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:02.019 [2024-11-20 12:49:27.519479] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:02.019 [2024-11-20 12:49:27.527781] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:02.019 [2024-11-20 12:49:27.528003] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:02.019 [2024-11-20 12:49:27.528014] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:02.281 12:49:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.281 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:02.281 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:17:02.281 12:49:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.281 12:49:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:02.281 [2024-11-20 12:49:27.542824] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:17:02.281 [2024-11-20 12:49:27.593205] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:02.281 [2024-11-20 12:49:27.594196] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:17:02.281 [2024-11-20 12:49:27.598768] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:02.281 [2024-11-20 12:49:27.598982] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:17:02.281 [2024-11-20 12:49:27.598989] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:17:02.281 12:49:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.281 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:02.281 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:17:02.281 12:49:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.281 12:49:27 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:17:02.281 [2024-11-20 12:49:27.614819] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:17:02.281 [2024-11-20 12:49:27.650789] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:02.281 [2024-11-20 12:49:27.651373] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:17:02.281 [2024-11-20 12:49:27.658766] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:02.281 [2024-11-20 12:49:27.658982] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:17:02.281 [2024-11-20 12:49:27.658994] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:17:02.281 12:49:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.281 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:17:02.542 [2024-11-20 12:49:27.850807] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:02.542 [2024-11-20 12:49:27.854263] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:02.542 [2024-11-20 12:49:27.854291] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:02.542 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:17:02.542 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:02.542 12:49:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:02.542 12:49:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.542 12:49:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:02.803 12:49:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.803 12:49:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:02.803 12:49:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:02.803 12:49:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.803 12:49:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.063 12:49:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.063 12:49:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:03.063 12:49:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:03.063 12:49:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.063 12:49:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.324 12:49:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.324 12:49:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:03.324 12:49:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:17:03.324 12:49:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.324 12:49:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.585 12:49:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.585 12:49:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:17:03.585 12:49:28 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:03.585 12:49:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.585 12:49:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.585 12:49:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.585 12:49:28 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:03.585 12:49:28 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:17:03.585 12:49:29 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:03.585 12:49:29 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:03.585 12:49:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.585 12:49:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.585 12:49:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.585 12:49:29 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:03.585 12:49:29 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:17:03.585 12:49:29 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:03.585 00:17:03.585 real 0m3.210s 00:17:03.585 user 0m0.814s 00:17:03.585 sys 0m0.135s 00:17:03.585 12:49:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.585 ************************************ 00:17:03.585 END TEST test_create_multi_ublk 00:17:03.585 12:49:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.585 ************************************ 00:17:03.585 12:49:29 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:17:03.585 12:49:29 ublk -- ublk/ublk.sh@147 -- # cleanup 00:17:03.585 12:49:29 ublk -- ublk/ublk.sh@130 -- # killprocess 73790 00:17:03.585 12:49:29 ublk -- common/autotest_common.sh@954 -- # '[' -z 73790 ']' 00:17:03.585 12:49:29 ublk -- common/autotest_common.sh@958 -- # kill -0 73790 00:17:03.585 12:49:29 ublk -- common/autotest_common.sh@959 -- # uname 00:17:03.585 12:49:29 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:03.585 12:49:29 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73790 00:17:03.585 12:49:29 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:03.585 killing process with pid 73790 00:17:03.585 12:49:29 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:03.585 12:49:29 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73790' 00:17:03.585 12:49:29 ublk -- common/autotest_common.sh@973 -- # kill 73790 00:17:03.585 12:49:29 ublk -- common/autotest_common.sh@978 -- # wait 73790 00:17:04.156 [2024-11-20 12:49:29.625902] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:04.156 [2024-11-20 12:49:29.625943] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:05.099 00:17:05.099 real 0m23.747s 00:17:05.099 user 0m34.660s 00:17:05.099 sys 0m9.096s 00:17:05.099 12:49:30 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:05.099 12:49:30 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.099 ************************************ 00:17:05.099 END TEST ublk 00:17:05.099 ************************************ 00:17:05.099 12:49:30 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:05.099 
12:49:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:05.099 12:49:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:05.099 12:49:30 -- common/autotest_common.sh@10 -- # set +x 00:17:05.099 ************************************ 00:17:05.099 START TEST ublk_recovery 00:17:05.099 ************************************ 00:17:05.099 12:49:30 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:05.099 * Looking for test storage... 00:17:05.099 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:05.099 12:49:30 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:05.099 12:49:30 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:17:05.099 12:49:30 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:05.099 12:49:30 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:05.099 12:49:30 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:17:05.099 12:49:30 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:05.099 12:49:30 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:05.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.099 --rc genhtml_branch_coverage=1 00:17:05.099 --rc genhtml_function_coverage=1 00:17:05.099 --rc genhtml_legend=1 00:17:05.099 --rc geninfo_all_blocks=1 00:17:05.099 --rc geninfo_unexecuted_blocks=1 00:17:05.099 00:17:05.099 ' 00:17:05.099 12:49:30 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:05.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.099 --rc genhtml_branch_coverage=1 00:17:05.099 --rc genhtml_function_coverage=1 00:17:05.099 --rc genhtml_legend=1 00:17:05.099 --rc geninfo_all_blocks=1 00:17:05.099 --rc geninfo_unexecuted_blocks=1 00:17:05.099 00:17:05.099 ' 00:17:05.099 12:49:30 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:05.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.099 --rc genhtml_branch_coverage=1 00:17:05.099 --rc genhtml_function_coverage=1 00:17:05.099 --rc genhtml_legend=1 00:17:05.099 --rc geninfo_all_blocks=1 00:17:05.099 --rc geninfo_unexecuted_blocks=1 00:17:05.099 00:17:05.099 ' 00:17:05.099 12:49:30 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:05.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.099 --rc genhtml_branch_coverage=1 00:17:05.099 --rc genhtml_function_coverage=1 00:17:05.099 --rc genhtml_legend=1 00:17:05.099 --rc geninfo_all_blocks=1 00:17:05.099 --rc geninfo_unexecuted_blocks=1 00:17:05.099 00:17:05.099 ' 00:17:05.099 12:49:30 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:05.099 12:49:30 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:05.099 12:49:30 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:05.099 12:49:30 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:05.099 12:49:30 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:05.099 12:49:30 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:05.099 12:49:30 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:05.099 12:49:30 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:05.099 12:49:30 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:17:05.099 12:49:30 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:17:05.099 12:49:30 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74175 00:17:05.099 12:49:30 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:05.099 12:49:30 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74175 00:17:05.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.099 12:49:30 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74175 ']' 00:17:05.099 12:49:30 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.099 12:49:30 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.099 12:49:30 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:05.099 12:49:30 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.099 12:49:30 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.099 12:49:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:05.099 [2024-11-20 12:49:30.530812] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:17:05.099 [2024-11-20 12:49:30.530932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74175 ] 00:17:05.360 [2024-11-20 12:49:30.685726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:05.360 [2024-11-20 12:49:30.762079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.360 [2024-11-20 12:49:30.762203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.927 12:49:31 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.927 12:49:31 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:05.927 12:49:31 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:17:05.927 12:49:31 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.927 12:49:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:05.927 [2024-11-20 12:49:31.317755] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:05.927 [2024-11-20 12:49:31.319220] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:05.927 12:49:31 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.927 12:49:31 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:05.927 12:49:31 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.927 12:49:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:05.927 malloc0 00:17:05.927 12:49:31 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.927 12:49:31 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:17:05.927 12:49:31 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.927 12:49:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:05.927 [2024-11-20 12:49:31.397855] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:17:05.927 [2024-11-20 12:49:31.397942] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:17:05.927 [2024-11-20 12:49:31.397949] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:05.927 [2024-11-20 12:49:31.397956] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:05.928 [2024-11-20 12:49:31.406833] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:05.928 [2024-11-20 12:49:31.406846] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:05.928 [2024-11-20 12:49:31.413763] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:05.928 [2024-11-20 12:49:31.413868] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:05.928 [2024-11-20 12:49:31.435765] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:06.188 1 00:17:06.188 12:49:31 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.188 12:49:31 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:17:07.130 12:49:32 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74210 00:17:07.130 12:49:32 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:17:07.130 12:49:32 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:17:07.130 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:07.130 fio-3.35 00:17:07.130 Starting 1 process 00:17:12.415 12:49:37 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74175 00:17:12.415 12:49:37 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:17.732 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74175 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:17.732 12:49:42 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74321 00:17:17.732 12:49:42 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:17.732 12:49:42 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:17.732 12:49:42 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74321 00:17:17.732 12:49:42 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74321 ']' 00:17:17.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.732 12:49:42 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.732 12:49:42 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.732 12:49:42 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.732 12:49:42 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.732 12:49:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.732 [2024-11-20 12:49:42.536280] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
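A condensed sketch of the crash/recovery sequence this test exercises, reconstructed from the RPCs visible in the trace (assuming scripts/rpc.py is on PATH as rpc.py and ublk_drv is loaded; the waitforlisten-style synchronization and sleeps are omitted, so this is an illustration of the flow, not ublk_recovery.sh itself):

# Export a 64 MiB malloc bdev as /dev/ublkb1 (2 queues, queue depth 128),
# start I/O against it, then crash and restart the target.
rpc.py ublk_create_target
rpc.py bdev_malloc_create -b malloc0 64 4096
rpc.py ublk_start_disk malloc0 1 -q 2 -d 128
fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
    --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
kill -9 "$spdk_pid"                         # 74175 in this run; target dies mid-I/O
"$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &   # restart the target (pid 74321 here)
rpc.py ublk_create_target                   # recreate target and backing bdev, then
rpc.py bdev_malloc_create -b malloc0 64 4096
rpc.py ublk_recover_disk malloc0 1          # re-attach the still-live /dev/ublkb1

The point of the test is that fio keeps running across the kill: the kernel-side /dev/ublkb1 survives, and ublk_recover_disk (the UBLK_CMD_START_USER_RECOVERY / UBLK_CMD_END_USER_RECOVERY exchange in the debug output below) re-binds it to the new target process.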
00:17:17.732 [2024-11-20 12:49:42.536421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74321 ] 00:17:17.732 [2024-11-20 12:49:42.693694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:17.732 [2024-11-20 12:49:42.775061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.732 [2024-11-20 12:49:42.775164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.994 12:49:43 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:17.994 12:49:43 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:17.994 12:49:43 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:17.994 12:49:43 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.994 12:49:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.994 [2024-11-20 12:49:43.366761] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:17.994 [2024-11-20 12:49:43.368228] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:17.994 12:49:43 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.994 12:49:43 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:17.994 12:49:43 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.994 12:49:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.994 malloc0 00:17:17.994 12:49:43 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.994 12:49:43 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:17.994 12:49:43 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.994 12:49:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.994 [2024-11-20 12:49:43.446854] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:17.994 [2024-11-20 12:49:43.446888] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:17.994 [2024-11-20 12:49:43.446896] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:17.994 [2024-11-20 12:49:43.454783] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:17.994 [2024-11-20 12:49:43.454806] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:17:17.994 [2024-11-20 12:49:43.454813] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:17.994 [2024-11-20 12:49:43.454873] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:17.994 1 00:17:17.994 12:49:43 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.994 12:49:43 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74210 00:17:17.994 [2024-11-20 12:49:43.462757] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:17.994 [2024-11-20 12:49:43.469252] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:17.994 [2024-11-20 12:49:43.476895] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:17.994 [2024-11-20 
12:49:43.476915] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:18:14.290 00:18:14.290 fio_test: (groupid=0, jobs=1): err= 0: pid=74213: Wed Nov 20 12:50:32 2024 00:18:14.290 read: IOPS=28.4k, BW=111MiB/s (116MB/s)(6665MiB/60001msec) 00:18:14.290 slat (nsec): min=1056, max=365772, avg=4805.04, stdev=1385.67 00:18:14.290 clat (usec): min=578, max=6037.1k, avg=2212.26, stdev=37246.49 00:18:14.290 lat (usec): min=588, max=6037.1k, avg=2217.06, stdev=37246.48 00:18:14.290 clat percentiles (usec): 00:18:14.290 | 1.00th=[ 1647], 5.00th=[ 1762], 10.00th=[ 1778], 20.00th=[ 1811], 00:18:14.290 | 30.00th=[ 1827], 40.00th=[ 1844], 50.00th=[ 1844], 60.00th=[ 1860], 00:18:14.290 | 70.00th=[ 1876], 80.00th=[ 1893], 90.00th=[ 1942], 95.00th=[ 2933], 00:18:14.290 | 99.00th=[ 5014], 99.50th=[ 5473], 99.90th=[ 7177], 99.95th=[ 8455], 00:18:14.290 | 99.99th=[12649] 00:18:14.290 bw ( KiB/s): min=20232, max=131880, per=100.00%, avg=125282.15, stdev=16098.74, samples=108 00:18:14.290 iops : min= 5058, max=32970, avg=31320.54, stdev=4024.69, samples=108 00:18:14.290 write: IOPS=28.4k, BW=111MiB/s (116MB/s)(6660MiB/60001msec); 0 zone resets 00:18:14.290 slat (nsec): min=1118, max=1156.1k, avg=4828.23, stdev=1714.24 00:18:14.290 clat (usec): min=583, max=6037.5k, avg=2280.06, stdev=36684.81 00:18:14.290 lat (usec): min=587, max=6037.5k, avg=2284.89, stdev=36684.81 00:18:14.290 clat percentiles (usec): 00:18:14.290 | 1.00th=[ 1680], 5.00th=[ 1844], 10.00th=[ 1860], 20.00th=[ 1893], 00:18:14.290 | 30.00th=[ 1909], 40.00th=[ 1926], 50.00th=[ 1942], 60.00th=[ 1958], 00:18:14.290 | 70.00th=[ 1975], 80.00th=[ 1991], 90.00th=[ 2024], 95.00th=[ 2868], 00:18:14.290 | 99.00th=[ 5080], 99.50th=[ 5538], 99.90th=[ 7177], 99.95th=[ 8455], 00:18:14.290 | 99.99th=[12911] 00:18:14.290 bw ( KiB/s): min=21168, max=131736, per=100.00%, avg=125178.37, stdev=16036.12, samples=108 00:18:14.290 iops : min= 5292, max=32934, avg=31294.59, stdev=4009.03, samples=108 00:18:14.290 lat (usec) : 750=0.01%, 1000=0.01% 00:18:14.290 lat (msec) : 2=88.47%, 4=8.76%, 10=2.73%, 20=0.04%, >=2000=0.01% 00:18:14.290 cpu : usr=6.05%, sys=28.31%, ctx=115887, majf=0, minf=14 00:18:14.290 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:14.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.290 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:14.290 issued rwts: total=1706285,1704931,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.290 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.290 00:18:14.290 Run status group 0 (all jobs): 00:18:14.290 READ: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=6665MiB (6989MB), run=60001-60001msec 00:18:14.290 WRITE: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=6660MiB (6983MB), run=60001-60001msec 00:18:14.290 00:18:14.290 Disk stats (read/write): 00:18:14.290 ublkb1: ios=1702838/1701402, merge=0/0, ticks=3683708/3661412, in_queue=7345120, util=99.89% 00:18:14.291 12:50:32 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:18:14.291 12:50:32 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.291 12:50:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:14.291 [2024-11-20 12:50:32.708069] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:14.291 [2024-11-20 12:50:32.750780] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:18:14.291 [2024-11-20 12:50:32.750911] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:14.291 [2024-11-20 12:50:32.760762] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:14.291 [2024-11-20 12:50:32.760857] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:14.291 [2024-11-20 12:50:32.760867] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:14.291 12:50:32 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.291 12:50:32 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:18:14.291 12:50:32 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.291 12:50:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:14.291 [2024-11-20 12:50:32.768827] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:14.291 [2024-11-20 12:50:32.772413] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:14.291 [2024-11-20 12:50:32.772445] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:14.291 12:50:32 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.291 12:50:32 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:18:14.291 12:50:32 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:18:14.291 12:50:32 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74321 00:18:14.291 12:50:32 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74321 ']' 00:18:14.291 12:50:32 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74321 00:18:14.291 12:50:32 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:18:14.291 12:50:32 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.291 12:50:32 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74321 00:18:14.291 12:50:32 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:14.291 12:50:32 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.291 killing process with pid 74321 00:18:14.291 12:50:32 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74321' 00:18:14.291 12:50:32 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74321 00:18:14.291 12:50:32 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74321 00:18:14.291 [2024-11-20 12:50:33.828912] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:14.291 [2024-11-20 12:50:33.828952] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:14.291 00:18:14.291 real 1m4.208s 00:18:14.291 user 1m42.854s 00:18:14.291 sys 0m35.472s 00:18:14.291 12:50:34 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:14.291 ************************************ 00:18:14.291 END TEST ublk_recovery 00:18:14.291 ************************************ 00:18:14.291 12:50:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:14.291 12:50:34 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:18:14.291 12:50:34 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:18:14.291 12:50:34 -- spdk/autotest.sh@260 -- # timing_exit lib 00:18:14.291 12:50:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:14.291 12:50:34 -- common/autotest_common.sh@10 -- # set +x 00:18:14.291 12:50:34 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:18:14.291 12:50:34 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:18:14.291 12:50:34 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 
']' 00:18:14.291 12:50:34 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:14.291 12:50:34 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:14.291 12:50:34 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:18:14.291 12:50:34 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:18:14.291 12:50:34 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:18:14.291 12:50:34 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:14.291 12:50:34 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:18:14.291 12:50:34 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:14.291 12:50:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:14.291 12:50:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:14.291 12:50:34 -- common/autotest_common.sh@10 -- # set +x 00:18:14.291 ************************************ 00:18:14.291 START TEST ftl 00:18:14.291 ************************************ 00:18:14.291 12:50:34 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:14.291 * Looking for test storage... 00:18:14.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:14.291 12:50:34 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:14.291 12:50:34 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:14.291 12:50:34 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:18:14.291 12:50:34 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:14.291 12:50:34 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:14.291 12:50:34 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:14.291 12:50:34 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:14.291 12:50:34 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:18:14.291 12:50:34 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:18:14.291 12:50:34 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:18:14.291 12:50:34 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:18:14.291 12:50:34 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:18:14.291 12:50:34 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:18:14.291 12:50:34 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:18:14.291 12:50:34 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:14.291 12:50:34 ftl -- scripts/common.sh@344 -- # case "$op" in 00:18:14.291 12:50:34 ftl -- scripts/common.sh@345 -- # : 1 00:18:14.291 12:50:34 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:14.291 12:50:34 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:14.291 12:50:34 ftl -- scripts/common.sh@365 -- # decimal 1 00:18:14.291 12:50:34 ftl -- scripts/common.sh@353 -- # local d=1 00:18:14.291 12:50:34 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:14.291 12:50:34 ftl -- scripts/common.sh@355 -- # echo 1 00:18:14.291 12:50:34 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:18:14.291 12:50:34 ftl -- scripts/common.sh@366 -- # decimal 2 00:18:14.291 12:50:34 ftl -- scripts/common.sh@353 -- # local d=2 00:18:14.291 12:50:34 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:14.291 12:50:34 ftl -- scripts/common.sh@355 -- # echo 2 00:18:14.291 12:50:34 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:18:14.291 12:50:34 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:14.291 12:50:34 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:14.291 12:50:34 ftl -- scripts/common.sh@368 -- # return 0 00:18:14.291 12:50:34 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:14.291 12:50:34 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:14.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.291 --rc genhtml_branch_coverage=1 00:18:14.291 --rc genhtml_function_coverage=1 00:18:14.291 --rc genhtml_legend=1 00:18:14.291 --rc geninfo_all_blocks=1 00:18:14.291 --rc geninfo_unexecuted_blocks=1 00:18:14.291 00:18:14.291 ' 00:18:14.291 12:50:34 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:14.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.291 --rc genhtml_branch_coverage=1 00:18:14.291 --rc genhtml_function_coverage=1 00:18:14.291 --rc genhtml_legend=1 00:18:14.291 --rc geninfo_all_blocks=1 00:18:14.291 --rc geninfo_unexecuted_blocks=1 00:18:14.291 00:18:14.291 ' 00:18:14.291 12:50:34 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:14.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.291 --rc genhtml_branch_coverage=1 00:18:14.291 --rc genhtml_function_coverage=1 00:18:14.291 --rc genhtml_legend=1 00:18:14.291 --rc geninfo_all_blocks=1 00:18:14.291 --rc geninfo_unexecuted_blocks=1 00:18:14.291 00:18:14.291 ' 00:18:14.291 12:50:34 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:14.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.291 --rc genhtml_branch_coverage=1 00:18:14.291 --rc genhtml_function_coverage=1 00:18:14.291 --rc genhtml_legend=1 00:18:14.291 --rc geninfo_all_blocks=1 00:18:14.291 --rc geninfo_unexecuted_blocks=1 00:18:14.291 00:18:14.291 ' 00:18:14.291 12:50:34 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:14.291 12:50:34 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:14.291 12:50:34 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:14.291 12:50:34 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:14.291 12:50:34 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
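The version check traced just above (and earlier for ublk_recovery) is scripts/common.sh deciding which LCOV_OPTS to export by testing whether the installed lcov is older than 2.x. A condensed sketch of the less-than path, reconstructed from the xtrace; the in-tree cmp_versions also handles other comparison operators and validates each component, which is elided here:

lt() {  # lt 1.15 2 -> exit status 0 when $1 < $2
  local -a ver1 ver2
  local v
  IFS=.-: read -ra ver1 <<< "$1"            # split on '.', '-' and ':'
  IFS=.-: read -ra ver2 <<< "$2"
  for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
    ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # strictly greater: not less-than
    ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # strictly smaller: less-than
  done
  return 1                                          # equal: not less-than
}

For "1.15" versus "2" the first components already differ (1 < 2), so the check returns 0 and the pre-2.x LCOV_OPTS are exported, as seen in the log.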
00:18:14.291 12:50:34 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:14.291 12:50:34 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:14.291 12:50:34 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:14.291 12:50:34 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:14.291 12:50:34 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:14.291 12:50:34 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:14.291 12:50:34 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:14.291 12:50:34 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:14.291 12:50:34 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:14.291 12:50:34 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:14.291 12:50:34 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:14.291 12:50:34 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:14.291 12:50:34 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:14.291 12:50:34 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:14.291 12:50:34 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:14.291 12:50:34 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:14.292 12:50:34 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:14.292 12:50:34 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:14.292 12:50:34 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:14.292 12:50:34 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:14.292 12:50:34 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:14.292 12:50:34 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:14.292 12:50:34 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:14.292 12:50:34 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:14.292 12:50:34 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:14.292 12:50:34 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:18:14.292 12:50:34 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:18:14.292 12:50:34 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:18:14.292 12:50:34 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:18:14.292 12:50:34 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:14.292 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:14.292 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:14.292 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:14.292 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:14.292 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:14.292 12:50:35 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=75126 00:18:14.292 12:50:35 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:14.292 12:50:35 ftl -- ftl/ftl.sh@38 -- # waitforlisten 75126 00:18:14.292 12:50:35 ftl -- common/autotest_common.sh@835 -- # '[' -z 75126 ']' 00:18:14.292 12:50:35 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.292 12:50:35 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.292 12:50:35 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.292 12:50:35 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.292 12:50:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:14.292 [2024-11-20 12:50:35.293769] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:18:14.292 [2024-11-20 12:50:35.294055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75126 ] 00:18:14.292 [2024-11-20 12:50:35.449326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.292 [2024-11-20 12:50:35.523162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.292 12:50:36 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.292 12:50:36 ftl -- common/autotest_common.sh@868 -- # return 0 00:18:14.292 12:50:36 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:18:14.292 12:50:36 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:14.292 12:50:36 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:18:14.292 12:50:36 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:14.292 12:50:37 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:18:14.292 12:50:37 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:14.292 12:50:37 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:14.292 12:50:37 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:18:14.292 12:50:37 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:18:14.292 12:50:37 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:18:14.292 12:50:37 ftl -- ftl/ftl.sh@50 -- # break 00:18:14.292 12:50:37 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:18:14.292 12:50:37 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:18:14.292 12:50:37 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:14.292 12:50:37 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:14.292 12:50:37 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:14.292 12:50:37 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:14.292 12:50:37 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:14.292 12:50:37 ftl -- ftl/ftl.sh@63 -- # break 00:18:14.292 12:50:37 ftl -- ftl/ftl.sh@66 -- # killprocess 75126 00:18:14.292 12:50:37 ftl -- common/autotest_common.sh@954 -- # '[' -z 75126 ']' 00:18:14.292 12:50:37 ftl -- common/autotest_common.sh@958 -- # kill -0 75126 00:18:14.292 12:50:37 ftl -- common/autotest_common.sh@959 -- # uname 00:18:14.292 12:50:37 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.292 12:50:37 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75126 00:18:14.292 killing process with pid 75126 00:18:14.292 12:50:37 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:14.292 12:50:37 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.292 12:50:37 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75126' 00:18:14.292 12:50:37 ftl -- common/autotest_common.sh@973 -- # kill 75126 00:18:14.292 12:50:37 ftl -- common/autotest_common.sh@978 -- # wait 75126 00:18:14.292 12:50:38 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:14.292 12:50:38 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:14.292 12:50:38 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:14.292 12:50:38 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:14.292 12:50:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:14.292 ************************************ 00:18:14.292 START TEST ftl_fio_basic 00:18:14.292 ************************************ 00:18:14.292 12:50:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:14.292 * Looking for test storage... 00:18:14.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:14.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.292 --rc genhtml_branch_coverage=1 00:18:14.292 --rc genhtml_function_coverage=1 00:18:14.292 --rc genhtml_legend=1 00:18:14.292 --rc geninfo_all_blocks=1 00:18:14.292 --rc geninfo_unexecuted_blocks=1 00:18:14.292 00:18:14.292 ' 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:14.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.292 --rc genhtml_branch_coverage=1 00:18:14.292 --rc genhtml_function_coverage=1 00:18:14.292 --rc genhtml_legend=1 00:18:14.292 --rc geninfo_all_blocks=1 00:18:14.292 --rc geninfo_unexecuted_blocks=1 00:18:14.292 00:18:14.292 ' 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:14.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.292 --rc genhtml_branch_coverage=1 00:18:14.292 --rc genhtml_function_coverage=1 00:18:14.292 --rc genhtml_legend=1 00:18:14.292 --rc geninfo_all_blocks=1 00:18:14.292 --rc geninfo_unexecuted_blocks=1 00:18:14.292 00:18:14.292 ' 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:14.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.292 --rc genhtml_branch_coverage=1 00:18:14.292 --rc genhtml_function_coverage=1 00:18:14.292 --rc genhtml_legend=1 00:18:14.292 --rc geninfo_all_blocks=1 00:18:14.292 --rc geninfo_unexecuted_blocks=1 00:18:14.292 00:18:14.292 ' 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:14.292 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
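Before handing 0000:00:11.0 and 0000:00:10.0 to fio.sh, ftl.sh selected them with the two bdev_get_bdevs | jq passes shown earlier in the log. A sketch of that selection (rpc.py standing in for the full scripts/rpc.py path): the cache pass requires a namespace exposing 64-byte metadata, and the base pass then excludes whichever BDF was chosen as the nv_cache, 0000:00:10.0 in this run:

cache_disks=$(rpc.py bdev_get_bdevs | jq -r '.[]
  | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
    .driver_specific.nvme[].pci_address')
base_disks=$(rpc.py bdev_get_bdevs | jq -r '.[]
  | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0"
           and .zoned == false and .num_blocks >= 1310720)
    .driver_specific.nvme[].pci_address')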
00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75248 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75248 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75248 ']' 00:18:14.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.293 12:50:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:14.293 [2024-11-20 12:50:39.246574] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
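fio.sh resolved the three basic-suite workloads shown above from its suite table; a sketch of that lookup, assuming the mode name arrives as the third positional argument (as in fio.sh 0000:00:11.0 0000:00:10.0 basic):

declare -A suite
suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'
tests=${suite[$3]}          # "basic" -> 'randw-verify randw-verify-j2 randw-verify-depth128'
[ -n "$tests" ] || exit 1   # guard is a sketch of the '[ -z ... ]' check at fio.sh@34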
00:18:14.293 [2024-11-20 12:50:39.247346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75248 ] 00:18:14.293 [2024-11-20 12:50:39.412145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:14.293 [2024-11-20 12:50:39.539993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.293 [2024-11-20 12:50:39.540296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.293 [2024-11-20 12:50:39.540417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.866 12:50:40 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.866 12:50:40 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:18:14.866 12:50:40 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:14.866 12:50:40 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:14.866 12:50:40 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:14.866 12:50:40 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:14.866 12:50:40 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:14.866 12:50:40 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:15.126 12:50:40 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:15.126 12:50:40 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:15.126 12:50:40 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:15.126 12:50:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:15.126 12:50:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:15.126 12:50:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:15.126 12:50:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:15.126 12:50:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:15.388 12:50:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:15.388 { 00:18:15.388 "name": "nvme0n1", 00:18:15.388 "aliases": [ 00:18:15.388 "28dbfb9a-543c-47c7-af69-0000c3af7050" 00:18:15.388 ], 00:18:15.388 "product_name": "NVMe disk", 00:18:15.388 "block_size": 4096, 00:18:15.388 "num_blocks": 1310720, 00:18:15.388 "uuid": "28dbfb9a-543c-47c7-af69-0000c3af7050", 00:18:15.388 "numa_id": -1, 00:18:15.388 "assigned_rate_limits": { 00:18:15.388 "rw_ios_per_sec": 0, 00:18:15.388 "rw_mbytes_per_sec": 0, 00:18:15.388 "r_mbytes_per_sec": 0, 00:18:15.388 "w_mbytes_per_sec": 0 00:18:15.388 }, 00:18:15.388 "claimed": false, 00:18:15.388 "zoned": false, 00:18:15.388 "supported_io_types": { 00:18:15.388 "read": true, 00:18:15.388 "write": true, 00:18:15.388 "unmap": true, 00:18:15.388 "flush": true, 00:18:15.388 "reset": true, 00:18:15.388 "nvme_admin": true, 00:18:15.388 "nvme_io": true, 00:18:15.388 "nvme_io_md": false, 00:18:15.388 "write_zeroes": true, 00:18:15.388 "zcopy": false, 00:18:15.388 "get_zone_info": false, 00:18:15.388 "zone_management": false, 00:18:15.388 "zone_append": false, 00:18:15.388 "compare": true, 00:18:15.388 "compare_and_write": false, 00:18:15.388 "abort": true, 00:18:15.388 
"seek_hole": false, 00:18:15.388 "seek_data": false, 00:18:15.388 "copy": true, 00:18:15.388 "nvme_iov_md": false 00:18:15.388 }, 00:18:15.388 "driver_specific": { 00:18:15.388 "nvme": [ 00:18:15.388 { 00:18:15.388 "pci_address": "0000:00:11.0", 00:18:15.388 "trid": { 00:18:15.388 "trtype": "PCIe", 00:18:15.388 "traddr": "0000:00:11.0" 00:18:15.388 }, 00:18:15.388 "ctrlr_data": { 00:18:15.388 "cntlid": 0, 00:18:15.388 "vendor_id": "0x1b36", 00:18:15.388 "model_number": "QEMU NVMe Ctrl", 00:18:15.388 "serial_number": "12341", 00:18:15.388 "firmware_revision": "8.0.0", 00:18:15.388 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:15.388 "oacs": { 00:18:15.388 "security": 0, 00:18:15.388 "format": 1, 00:18:15.388 "firmware": 0, 00:18:15.388 "ns_manage": 1 00:18:15.388 }, 00:18:15.388 "multi_ctrlr": false, 00:18:15.388 "ana_reporting": false 00:18:15.388 }, 00:18:15.388 "vs": { 00:18:15.388 "nvme_version": "1.4" 00:18:15.388 }, 00:18:15.388 "ns_data": { 00:18:15.388 "id": 1, 00:18:15.388 "can_share": false 00:18:15.388 } 00:18:15.388 } 00:18:15.388 ], 00:18:15.388 "mp_policy": "active_passive" 00:18:15.388 } 00:18:15.388 } 00:18:15.388 ]' 00:18:15.388 12:50:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:15.388 12:50:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:15.388 12:50:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:15.388 12:50:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:15.388 12:50:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:15.388 12:50:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:18:15.388 12:50:40 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:15.388 12:50:40 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:15.388 12:50:40 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:15.388 12:50:40 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:15.388 12:50:40 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:15.649 12:50:41 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:15.649 12:50:41 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:15.911 12:50:41 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=0ae1273f-3452-4db1-bd9d-db7e8eaae1c5 00:18:15.911 12:50:41 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0ae1273f-3452-4db1-bd9d-db7e8eaae1c5 00:18:15.911 12:50:41 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=f97f6c8e-46f8-4b20-9a9a-39c685e637d1 00:18:15.911 12:50:41 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f97f6c8e-46f8-4b20-9a9a-39c685e637d1 00:18:15.911 12:50:41 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:15.911 12:50:41 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:15.911 12:50:41 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=f97f6c8e-46f8-4b20-9a9a-39c685e637d1 00:18:15.911 12:50:41 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:15.911 12:50:41 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size f97f6c8e-46f8-4b20-9a9a-39c685e637d1 00:18:15.911 12:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=f97f6c8e-46f8-4b20-9a9a-39c685e637d1 
00:18:15.911 12:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:15.911 12:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:15.911 12:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:15.911 12:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f97f6c8e-46f8-4b20-9a9a-39c685e637d1 00:18:16.172 12:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:16.173 { 00:18:16.173 "name": "f97f6c8e-46f8-4b20-9a9a-39c685e637d1", 00:18:16.173 "aliases": [ 00:18:16.173 "lvs/nvme0n1p0" 00:18:16.173 ], 00:18:16.173 "product_name": "Logical Volume", 00:18:16.173 "block_size": 4096, 00:18:16.173 "num_blocks": 26476544, 00:18:16.173 "uuid": "f97f6c8e-46f8-4b20-9a9a-39c685e637d1", 00:18:16.173 "assigned_rate_limits": { 00:18:16.173 "rw_ios_per_sec": 0, 00:18:16.173 "rw_mbytes_per_sec": 0, 00:18:16.173 "r_mbytes_per_sec": 0, 00:18:16.173 "w_mbytes_per_sec": 0 00:18:16.173 }, 00:18:16.173 "claimed": false, 00:18:16.173 "zoned": false, 00:18:16.173 "supported_io_types": { 00:18:16.173 "read": true, 00:18:16.173 "write": true, 00:18:16.173 "unmap": true, 00:18:16.173 "flush": false, 00:18:16.173 "reset": true, 00:18:16.173 "nvme_admin": false, 00:18:16.173 "nvme_io": false, 00:18:16.173 "nvme_io_md": false, 00:18:16.173 "write_zeroes": true, 00:18:16.173 "zcopy": false, 00:18:16.173 "get_zone_info": false, 00:18:16.173 "zone_management": false, 00:18:16.173 "zone_append": false, 00:18:16.173 "compare": false, 00:18:16.173 "compare_and_write": false, 00:18:16.173 "abort": false, 00:18:16.173 "seek_hole": true, 00:18:16.173 "seek_data": true, 00:18:16.173 "copy": false, 00:18:16.173 "nvme_iov_md": false 00:18:16.173 }, 00:18:16.173 "driver_specific": { 00:18:16.173 "lvol": { 00:18:16.173 "lvol_store_uuid": "0ae1273f-3452-4db1-bd9d-db7e8eaae1c5", 00:18:16.173 "base_bdev": "nvme0n1", 00:18:16.173 "thin_provision": true, 00:18:16.173 "num_allocated_clusters": 0, 00:18:16.173 "snapshot": false, 00:18:16.173 "clone": false, 00:18:16.173 "esnap_clone": false 00:18:16.173 } 00:18:16.173 } 00:18:16.173 } 00:18:16.173 ]' 00:18:16.173 12:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:16.173 12:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:16.173 12:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:16.173 12:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:16.173 12:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:16.173 12:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:16.173 12:50:41 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:16.173 12:50:41 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:16.173 12:50:41 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:16.433 12:50:41 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:16.433 12:50:41 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:16.433 12:50:41 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size f97f6c8e-46f8-4b20-9a9a-39c685e637d1 00:18:16.433 12:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=f97f6c8e-46f8-4b20-9a9a-39c685e637d1 00:18:16.433 12:50:41 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:16.433 12:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:16.433 12:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:16.433 12:50:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f97f6c8e-46f8-4b20-9a9a-39c685e637d1 00:18:16.691 12:50:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:16.691 { 00:18:16.691 "name": "f97f6c8e-46f8-4b20-9a9a-39c685e637d1", 00:18:16.691 "aliases": [ 00:18:16.691 "lvs/nvme0n1p0" 00:18:16.691 ], 00:18:16.691 "product_name": "Logical Volume", 00:18:16.691 "block_size": 4096, 00:18:16.691 "num_blocks": 26476544, 00:18:16.691 "uuid": "f97f6c8e-46f8-4b20-9a9a-39c685e637d1", 00:18:16.691 "assigned_rate_limits": { 00:18:16.691 "rw_ios_per_sec": 0, 00:18:16.691 "rw_mbytes_per_sec": 0, 00:18:16.691 "r_mbytes_per_sec": 0, 00:18:16.691 "w_mbytes_per_sec": 0 00:18:16.691 }, 00:18:16.691 "claimed": false, 00:18:16.691 "zoned": false, 00:18:16.691 "supported_io_types": { 00:18:16.691 "read": true, 00:18:16.691 "write": true, 00:18:16.691 "unmap": true, 00:18:16.691 "flush": false, 00:18:16.691 "reset": true, 00:18:16.691 "nvme_admin": false, 00:18:16.691 "nvme_io": false, 00:18:16.691 "nvme_io_md": false, 00:18:16.691 "write_zeroes": true, 00:18:16.691 "zcopy": false, 00:18:16.691 "get_zone_info": false, 00:18:16.691 "zone_management": false, 00:18:16.691 "zone_append": false, 00:18:16.691 "compare": false, 00:18:16.691 "compare_and_write": false, 00:18:16.691 "abort": false, 00:18:16.691 "seek_hole": true, 00:18:16.691 "seek_data": true, 00:18:16.691 "copy": false, 00:18:16.691 "nvme_iov_md": false 00:18:16.691 }, 00:18:16.691 "driver_specific": { 00:18:16.691 "lvol": { 00:18:16.691 "lvol_store_uuid": "0ae1273f-3452-4db1-bd9d-db7e8eaae1c5", 00:18:16.691 "base_bdev": "nvme0n1", 00:18:16.691 "thin_provision": true, 00:18:16.691 "num_allocated_clusters": 0, 00:18:16.691 "snapshot": false, 00:18:16.691 "clone": false, 00:18:16.691 "esnap_clone": false 00:18:16.691 } 00:18:16.691 } 00:18:16.691 } 00:18:16.691 ]' 00:18:16.691 12:50:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:16.691 12:50:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:16.691 12:50:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:16.691 12:50:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:16.691 12:50:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:16.691 12:50:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:16.691 12:50:42 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:16.691 12:50:42 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:16.949 12:50:42 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:16.949 12:50:42 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:18:16.949 12:50:42 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:16.949 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:16.949 12:50:42 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size f97f6c8e-46f8-4b20-9a9a-39c685e637d1 00:18:16.949 12:50:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=f97f6c8e-46f8-4b20-9a9a-39c685e637d1 00:18:16.949 12:50:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:16.949 12:50:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:16.949 12:50:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:16.949 12:50:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f97f6c8e-46f8-4b20-9a9a-39c685e637d1 00:18:17.208 12:50:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:17.208 { 00:18:17.208 "name": "f97f6c8e-46f8-4b20-9a9a-39c685e637d1", 00:18:17.208 "aliases": [ 00:18:17.208 "lvs/nvme0n1p0" 00:18:17.208 ], 00:18:17.208 "product_name": "Logical Volume", 00:18:17.208 "block_size": 4096, 00:18:17.208 "num_blocks": 26476544, 00:18:17.208 "uuid": "f97f6c8e-46f8-4b20-9a9a-39c685e637d1", 00:18:17.208 "assigned_rate_limits": { 00:18:17.208 "rw_ios_per_sec": 0, 00:18:17.208 "rw_mbytes_per_sec": 0, 00:18:17.208 "r_mbytes_per_sec": 0, 00:18:17.208 "w_mbytes_per_sec": 0 00:18:17.208 }, 00:18:17.208 "claimed": false, 00:18:17.208 "zoned": false, 00:18:17.208 "supported_io_types": { 00:18:17.208 "read": true, 00:18:17.208 "write": true, 00:18:17.208 "unmap": true, 00:18:17.208 "flush": false, 00:18:17.208 "reset": true, 00:18:17.208 "nvme_admin": false, 00:18:17.208 "nvme_io": false, 00:18:17.208 "nvme_io_md": false, 00:18:17.208 "write_zeroes": true, 00:18:17.208 "zcopy": false, 00:18:17.208 "get_zone_info": false, 00:18:17.208 "zone_management": false, 00:18:17.208 "zone_append": false, 00:18:17.208 "compare": false, 00:18:17.208 "compare_and_write": false, 00:18:17.208 "abort": false, 00:18:17.208 "seek_hole": true, 00:18:17.208 "seek_data": true, 00:18:17.208 "copy": false, 00:18:17.208 "nvme_iov_md": false 00:18:17.208 }, 00:18:17.208 "driver_specific": { 00:18:17.208 "lvol": { 00:18:17.208 "lvol_store_uuid": "0ae1273f-3452-4db1-bd9d-db7e8eaae1c5", 00:18:17.208 "base_bdev": "nvme0n1", 00:18:17.208 "thin_provision": true, 00:18:17.208 "num_allocated_clusters": 0, 00:18:17.208 "snapshot": false, 00:18:17.208 "clone": false, 00:18:17.208 "esnap_clone": false 00:18:17.208 } 00:18:17.208 } 00:18:17.208 } 00:18:17.208 ]' 00:18:17.208 12:50:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:17.208 12:50:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:17.208 12:50:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:17.208 12:50:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:17.208 12:50:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:17.208 12:50:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:17.208 12:50:42 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:17.208 12:50:42 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:17.208 12:50:42 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f97f6c8e-46f8-4b20-9a9a-39c685e637d1 -c nvc0n1p0 --l2p_dram_limit 60 00:18:17.208 [2024-11-20 12:50:42.711276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.208 [2024-11-20 12:50:42.711322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:17.208 [2024-11-20 12:50:42.711335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:17.208 
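Two things are worth noting in the trace above. First, the repeated bdev dumps all feed get_bdev_size, which is just block_size * num_blocks converted to MiB, and the numbers check out; a sketch of the same jq arithmetic:

  # 4096 B * 1310720 blocks / 2^20 = 5120 MiB for the raw namespace;
  # 4096 B * 26476544 blocks / 2^20 = 103424 MiB for the thin-provisioned lvol.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_get_bdevs -b nvme0n1 | jq '.[0] | .block_size * .num_blocks / 1048576'   # 5120

Second, the non-fatal 'line 52: [: -eq: unary operator expected' logged a little above means the left operand of -eq expanded to an empty string, so [ saw only '-eq 1'. The usual hardening, shown with a placeholder variable rather than the actual fio.sh one:

  flag=""                               # empty, as at fio.sh line 52
  # [ $flag -eq 1 ]                     # expands to '[ -eq 1 ]' -> unary operator expected
  [ "${flag:-0}" -eq 1 ] && echo set    # quote the operand and give it a default
  [[ ${flag:-0} -eq 1 ]] && echo set    # or use [[ ]], which does not word-split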
[2024-11-20 12:50:42.711342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.208 [2024-11-20 12:50:42.711390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.208 [2024-11-20 12:50:42.711400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:17.208 [2024-11-20 12:50:42.711408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:17.208 [2024-11-20 12:50:42.711413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.208 [2024-11-20 12:50:42.711440] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:17.208 [2024-11-20 12:50:42.712030] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:17.208 [2024-11-20 12:50:42.712051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.208 [2024-11-20 12:50:42.712057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:17.208 [2024-11-20 12:50:42.712065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:18:17.208 [2024-11-20 12:50:42.712071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.208 [2024-11-20 12:50:42.712102] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0152e4a1-1304-42d8-aa77-6eca5d8c5660 00:18:17.208 [2024-11-20 12:50:42.713124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.208 [2024-11-20 12:50:42.713153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:17.208 [2024-11-20 12:50:42.713161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:17.208 [2024-11-20 12:50:42.713168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.208 [2024-11-20 12:50:42.717864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.208 [2024-11-20 12:50:42.717892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:17.208 [2024-11-20 12:50:42.717900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.630 ms 00:18:17.208 [2024-11-20 12:50:42.717907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.208 [2024-11-20 12:50:42.717988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.208 [2024-11-20 12:50:42.717997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:17.208 [2024-11-20 12:50:42.718003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:18:17.208 [2024-11-20 12:50:42.718013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.208 [2024-11-20 12:50:42.718055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.208 [2024-11-20 12:50:42.718063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:17.208 [2024-11-20 12:50:42.718070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:17.208 [2024-11-20 12:50:42.718077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.208 [2024-11-20 12:50:42.718100] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:17.208 [2024-11-20 12:50:42.720943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.208 [2024-11-20 
12:50:42.720967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:17.208 [2024-11-20 12:50:42.720977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.846 ms 00:18:17.208 [2024-11-20 12:50:42.720984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.208 [2024-11-20 12:50:42.721016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.208 [2024-11-20 12:50:42.721023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:17.208 [2024-11-20 12:50:42.721030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:17.208 [2024-11-20 12:50:42.721036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.208 [2024-11-20 12:50:42.721056] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:17.208 [2024-11-20 12:50:42.721169] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:17.208 [2024-11-20 12:50:42.721181] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:17.208 [2024-11-20 12:50:42.721190] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:17.208 [2024-11-20 12:50:42.721199] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:17.208 [2024-11-20 12:50:42.721206] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:17.208 [2024-11-20 12:50:42.721213] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:17.208 [2024-11-20 12:50:42.721219] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:17.208 [2024-11-20 12:50:42.721226] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:17.208 [2024-11-20 12:50:42.721232] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:17.208 [2024-11-20 12:50:42.721238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.208 [2024-11-20 12:50:42.721246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:17.208 [2024-11-20 12:50:42.721254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:18:17.209 [2024-11-20 12:50:42.721260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.209 [2024-11-20 12:50:42.721328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.209 [2024-11-20 12:50:42.721334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:17.209 [2024-11-20 12:50:42.721341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:17.209 [2024-11-20 12:50:42.721346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.209 [2024-11-20 12:50:42.721433] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:17.209 [2024-11-20 12:50:42.721440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:17.209 [2024-11-20 12:50:42.721449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:17.209 [2024-11-20 12:50:42.721454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.209 [2024-11-20 12:50:42.721461] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:18:17.209 [2024-11-20 12:50:42.721467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:17.209 [2024-11-20 12:50:42.721473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:17.209 [2024-11-20 12:50:42.721479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:17.209 [2024-11-20 12:50:42.721485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:17.209 [2024-11-20 12:50:42.721490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:17.209 [2024-11-20 12:50:42.721497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:17.209 [2024-11-20 12:50:42.721503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:17.209 [2024-11-20 12:50:42.721509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:17.209 [2024-11-20 12:50:42.721514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:17.209 [2024-11-20 12:50:42.721520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:17.209 [2024-11-20 12:50:42.721525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.209 [2024-11-20 12:50:42.721534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:17.209 [2024-11-20 12:50:42.721540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:17.209 [2024-11-20 12:50:42.721546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.209 [2024-11-20 12:50:42.721551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:17.209 [2024-11-20 12:50:42.721558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:17.209 [2024-11-20 12:50:42.721563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:17.209 [2024-11-20 12:50:42.721569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:17.209 [2024-11-20 12:50:42.721574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:17.209 [2024-11-20 12:50:42.721580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:17.209 [2024-11-20 12:50:42.721585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:17.209 [2024-11-20 12:50:42.721591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:17.209 [2024-11-20 12:50:42.721596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:17.209 [2024-11-20 12:50:42.721602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:17.209 [2024-11-20 12:50:42.721607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:17.209 [2024-11-20 12:50:42.721616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:17.209 [2024-11-20 12:50:42.721621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:17.209 [2024-11-20 12:50:42.721629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:17.209 [2024-11-20 12:50:42.721634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:17.209 [2024-11-20 12:50:42.721640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:17.209 [2024-11-20 12:50:42.721655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:17.209 [2024-11-20 12:50:42.721661] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:17.209 [2024-11-20 12:50:42.721666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:17.209 [2024-11-20 12:50:42.721672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:17.209 [2024-11-20 12:50:42.721677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.209 [2024-11-20 12:50:42.721684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:17.209 [2024-11-20 12:50:42.721688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:17.209 [2024-11-20 12:50:42.721695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.209 [2024-11-20 12:50:42.721700] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:17.209 [2024-11-20 12:50:42.721707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:17.209 [2024-11-20 12:50:42.721712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:17.209 [2024-11-20 12:50:42.721719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.209 [2024-11-20 12:50:42.721725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:17.209 [2024-11-20 12:50:42.721734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:17.209 [2024-11-20 12:50:42.721757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:17.209 [2024-11-20 12:50:42.721764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:17.209 [2024-11-20 12:50:42.721770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:17.209 [2024-11-20 12:50:42.721776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:17.209 [2024-11-20 12:50:42.721784] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:17.209 [2024-11-20 12:50:42.721793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:17.209 [2024-11-20 12:50:42.721800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:17.209 [2024-11-20 12:50:42.721806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:17.209 [2024-11-20 12:50:42.721812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:17.209 [2024-11-20 12:50:42.721819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:17.209 [2024-11-20 12:50:42.721824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:17.209 [2024-11-20 12:50:42.721831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:17.209 [2024-11-20 12:50:42.721837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:17.209 [2024-11-20 12:50:42.721845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:18:17.209 [2024-11-20 12:50:42.721850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:17.209 [2024-11-20 12:50:42.721859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:17.209 [2024-11-20 12:50:42.721864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:17.209 [2024-11-20 12:50:42.721872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:17.209 [2024-11-20 12:50:42.721877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:17.209 [2024-11-20 12:50:42.721884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:17.209 [2024-11-20 12:50:42.721889] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:17.209 [2024-11-20 12:50:42.721897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:17.209 [2024-11-20 12:50:42.721905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:17.209 [2024-11-20 12:50:42.721912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:17.209 [2024-11-20 12:50:42.721919] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:17.209 [2024-11-20 12:50:42.721925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:17.209 [2024-11-20 12:50:42.721931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.209 [2024-11-20 12:50:42.721943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:17.209 [2024-11-20 12:50:42.721949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:18:17.209 [2024-11-20 12:50:42.721956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.209 [2024-11-20 12:50:42.722017] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
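The layout dump above is internally consistent: 20971520 L2P entries at the reported 4-byte address size give exactly the 80 MiB shown for the l2p region, and the same entry count times the 4 KiB block size is the 80 GiB (20971520 blocks) that ftl0 later reports in bdev_get_bdevs. A quick arithmetic check:

  entries=20971520    # "L2P entries" from the layout dump
  addr=4              # "L2P address size: 4"
  echo $(( entries * addr / 1024 / 1024 ))          # 80 -> "Region l2p ... blocks: 80.00 MiB"
  echo $(( entries * 4096 / 1024 / 1024 / 1024 ))   # 80 GiB of user-visible capacity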
00:18:17.209 [2024-11-20 12:50:42.722029] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:20.503 [2024-11-20 12:50:45.459042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-11-20 12:50:45.459229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:20.503 [2024-11-20 12:50:45.459253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2737.012 ms 00:18:20.503 [2024-11-20 12:50:45.459263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-11-20 12:50:45.484375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-11-20 12:50:45.484419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:20.503 [2024-11-20 12:50:45.484432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.901 ms 00:18:20.503 [2024-11-20 12:50:45.484441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-11-20 12:50:45.484566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-11-20 12:50:45.484579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:20.503 [2024-11-20 12:50:45.484587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:20.503 [2024-11-20 12:50:45.484598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-11-20 12:50:45.526578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-11-20 12:50:45.526620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:20.503 [2024-11-20 12:50:45.526635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.939 ms 00:18:20.503 [2024-11-20 12:50:45.526645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-11-20 12:50:45.526683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-11-20 12:50:45.526693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:20.503 [2024-11-20 12:50:45.526702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:20.503 [2024-11-20 12:50:45.526710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-11-20 12:50:45.527077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-11-20 12:50:45.527105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:20.503 [2024-11-20 12:50:45.527114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:18:20.503 [2024-11-20 12:50:45.527126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-11-20 12:50:45.527243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-11-20 12:50:45.527259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:20.503 [2024-11-20 12:50:45.527267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:18:20.503 [2024-11-20 12:50:45.527278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-11-20 12:50:45.545182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-11-20 12:50:45.545214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:20.503 [2024-11-20 
12:50:45.545224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.879 ms 00:18:20.503 [2024-11-20 12:50:45.545233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-11-20 12:50:45.556544] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:20.503 [2024-11-20 12:50:45.570406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-11-20 12:50:45.570449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:20.503 [2024-11-20 12:50:45.570461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.093 ms 00:18:20.503 [2024-11-20 12:50:45.570471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.503 [2024-11-20 12:50:45.623397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.503 [2024-11-20 12:50:45.623553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:20.503 [2024-11-20 12:50:45.623577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.892 ms 00:18:20.504 [2024-11-20 12:50:45.623586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.504 [2024-11-20 12:50:45.623783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.504 [2024-11-20 12:50:45.623800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:20.504 [2024-11-20 12:50:45.623813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:18:20.504 [2024-11-20 12:50:45.623821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.504 [2024-11-20 12:50:45.646610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.504 [2024-11-20 12:50:45.646720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:20.504 [2024-11-20 12:50:45.646792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.737 ms 00:18:20.504 [2024-11-20 12:50:45.646817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.504 [2024-11-20 12:50:45.668574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.504 [2024-11-20 12:50:45.668678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:20.504 [2024-11-20 12:50:45.668751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.708 ms 00:18:20.504 [2024-11-20 12:50:45.668773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.504 [2024-11-20 12:50:45.669356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.504 [2024-11-20 12:50:45.669433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:20.504 [2024-11-20 12:50:45.669482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:18:20.504 [2024-11-20 12:50:45.669504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.504 [2024-11-20 12:50:45.733453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.504 [2024-11-20 12:50:45.733574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:20.504 [2024-11-20 12:50:45.733632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.894 ms 00:18:20.504 [2024-11-20 12:50:45.733658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.504 [2024-11-20 
12:50:45.757539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.504 [2024-11-20 12:50:45.757646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:20.504 [2024-11-20 12:50:45.757697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.764 ms 00:18:20.504 [2024-11-20 12:50:45.757720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.504 [2024-11-20 12:50:45.780470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.504 [2024-11-20 12:50:45.780574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:20.504 [2024-11-20 12:50:45.780622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.680 ms 00:18:20.504 [2024-11-20 12:50:45.780645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.504 [2024-11-20 12:50:45.803057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.504 [2024-11-20 12:50:45.803167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:20.504 [2024-11-20 12:50:45.803219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.361 ms 00:18:20.504 [2024-11-20 12:50:45.803242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.504 [2024-11-20 12:50:45.803313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.504 [2024-11-20 12:50:45.803340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:20.504 [2024-11-20 12:50:45.803364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:20.504 [2024-11-20 12:50:45.803385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.504 [2024-11-20 12:50:45.803483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.504 [2024-11-20 12:50:45.803602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:20.504 [2024-11-20 12:50:45.803624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:20.504 [2024-11-20 12:50:45.803643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.504 [2024-11-20 12:50:45.804540] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3092.846 ms, result 0 00:18:20.504 { 00:18:20.504 "name": "ftl0", 00:18:20.504 "uuid": "0152e4a1-1304-42d8-aa77-6eca5d8c5660" 00:18:20.504 } 00:18:20.504 12:50:45 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:20.504 12:50:45 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:20.504 12:50:45 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:20.504 12:50:45 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:18:20.504 12:50:45 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:20.504 12:50:45 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:20.504 12:50:45 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:20.766 12:50:46 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:20.766 [ 00:18:20.766 { 00:18:20.766 "name": "ftl0", 00:18:20.766 "aliases": [ 00:18:20.766 "0152e4a1-1304-42d8-aa77-6eca5d8c5660" 00:18:20.766 ], 00:18:20.766 "product_name": "FTL 
disk", 00:18:20.766 "block_size": 4096, 00:18:20.766 "num_blocks": 20971520, 00:18:20.766 "uuid": "0152e4a1-1304-42d8-aa77-6eca5d8c5660", 00:18:20.766 "assigned_rate_limits": { 00:18:20.766 "rw_ios_per_sec": 0, 00:18:20.766 "rw_mbytes_per_sec": 0, 00:18:20.766 "r_mbytes_per_sec": 0, 00:18:20.766 "w_mbytes_per_sec": 0 00:18:20.766 }, 00:18:20.766 "claimed": false, 00:18:20.766 "zoned": false, 00:18:20.766 "supported_io_types": { 00:18:20.766 "read": true, 00:18:20.766 "write": true, 00:18:20.766 "unmap": true, 00:18:20.766 "flush": true, 00:18:20.766 "reset": false, 00:18:20.766 "nvme_admin": false, 00:18:20.766 "nvme_io": false, 00:18:20.766 "nvme_io_md": false, 00:18:20.766 "write_zeroes": true, 00:18:20.766 "zcopy": false, 00:18:20.766 "get_zone_info": false, 00:18:20.766 "zone_management": false, 00:18:20.766 "zone_append": false, 00:18:20.766 "compare": false, 00:18:20.766 "compare_and_write": false, 00:18:20.766 "abort": false, 00:18:20.766 "seek_hole": false, 00:18:20.766 "seek_data": false, 00:18:20.766 "copy": false, 00:18:20.766 "nvme_iov_md": false 00:18:20.766 }, 00:18:20.766 "driver_specific": { 00:18:20.766 "ftl": { 00:18:20.766 "base_bdev": "f97f6c8e-46f8-4b20-9a9a-39c685e637d1", 00:18:20.766 "cache": "nvc0n1p0" 00:18:20.766 } 00:18:20.766 } 00:18:20.766 } 00:18:20.766 ] 00:18:20.766 12:50:46 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:18:20.766 12:50:46 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:20.766 12:50:46 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:21.027 12:50:46 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:21.027 12:50:46 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:21.290 [2024-11-20 12:50:46.625279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.290 [2024-11-20 12:50:46.625324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:21.290 [2024-11-20 12:50:46.625338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:21.290 [2024-11-20 12:50:46.625348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.290 [2024-11-20 12:50:46.625379] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:21.290 [2024-11-20 12:50:46.628016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.290 [2024-11-20 12:50:46.628045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:21.290 [2024-11-20 12:50:46.628058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.619 ms 00:18:21.290 [2024-11-20 12:50:46.628066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.290 [2024-11-20 12:50:46.628478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.290 [2024-11-20 12:50:46.628491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:21.290 [2024-11-20 12:50:46.628501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:18:21.290 [2024-11-20 12:50:46.628508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.290 [2024-11-20 12:50:46.631763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.290 [2024-11-20 12:50:46.631787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:21.290 
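The echo '{"subsystems": [' / save_subsystem_config / echo ']}' trio at fio.sh lines 68-70 above assembles the JSON config that fio's spdk_bdev ioengine loads through FTL_JSON_CONF. In effect (a sketch: the redirection target follows from the FTL_JSON_CONF export near the top of this test, and the grouping here is not the literal fio.sh code):

  {
      echo '{"subsystems": ['
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
      echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json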
[2024-11-20 12:50:46.631797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.234 ms 00:18:21.290 [2024-11-20 12:50:46.631805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.290 [2024-11-20 12:50:46.637924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.290 [2024-11-20 12:50:46.638044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:21.290 [2024-11-20 12:50:46.638064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.095 ms 00:18:21.290 [2024-11-20 12:50:46.638073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.290 [2024-11-20 12:50:46.661424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.290 [2024-11-20 12:50:46.661540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:21.290 [2024-11-20 12:50:46.661558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.257 ms 00:18:21.290 [2024-11-20 12:50:46.661565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.290 [2024-11-20 12:50:46.675685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.290 [2024-11-20 12:50:46.675720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:21.290 [2024-11-20 12:50:46.675734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.071 ms 00:18:21.290 [2024-11-20 12:50:46.675757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.290 [2024-11-20 12:50:46.675939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.290 [2024-11-20 12:50:46.675950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:21.290 [2024-11-20 12:50:46.675960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:18:21.290 [2024-11-20 12:50:46.675967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.290 [2024-11-20 12:50:46.698366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.290 [2024-11-20 12:50:46.698395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:21.290 [2024-11-20 12:50:46.698407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.377 ms 00:18:21.290 [2024-11-20 12:50:46.698415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.290 [2024-11-20 12:50:46.720606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.290 [2024-11-20 12:50:46.720635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:21.290 [2024-11-20 12:50:46.720646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.149 ms 00:18:21.290 [2024-11-20 12:50:46.720653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.290 [2024-11-20 12:50:46.742974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.290 [2024-11-20 12:50:46.743089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:21.290 [2024-11-20 12:50:46.743108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.276 ms 00:18:21.290 [2024-11-20 12:50:46.743114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.290 [2024-11-20 12:50:46.765419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.290 [2024-11-20 12:50:46.765448] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:21.290 [2024-11-20 12:50:46.765460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.202 ms 00:18:21.290 [2024-11-20 12:50:46.765467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.290 [2024-11-20 12:50:46.765509] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:21.290 [2024-11-20 12:50:46.765521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 
[2024-11-20 12:50:46.765707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:21.290 [2024-11-20 12:50:46.765861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:21.291 [2024-11-20 12:50:46.765872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:21.291 [2024-11-20 12:50:46.765879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:21.291 [2024-11-20 12:50:46.765888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:21.291 [2024-11-20 12:50:46.765895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:21.291 [2024-11-20 12:50:46.765922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:21.291 [2024-11-20 12:50:46.765930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:21.291 [2024-11-20 12:50:46.765938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:21.291 [2024-11-20 12:50:46.765946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:18:21.291 [2024-11-20 12:50:46.765956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 48–96: 0 / 261120 wr_cnt: 0 state: free (identical per-band entries elided) 00:18:21.291 [2024-11-20 12:50:46.766374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:21.291 [2024-11-20 12:50:46.766383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:21.291 [2024-11-20 12:50:46.766390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:21.291 [2024-11-20 12:50:46.766400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:21.291 [2024-11-20 12:50:46.766415] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:21.291 [2024-11-20 12:50:46.766424] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0152e4a1-1304-42d8-aa77-6eca5d8c5660 00:18:21.291 [2024-11-20 12:50:46.766432] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:21.291 [2024-11-20 12:50:46.766442] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:21.291 [2024-11-20 12:50:46.766449] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:21.291 [2024-11-20 12:50:46.766460] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:21.291 [2024-11-20 12:50:46.766466] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:21.291 [2024-11-20 12:50:46.766475] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:21.291 [2024-11-20 12:50:46.766482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:21.291 [2024-11-20 12:50:46.766490] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:21.291 [2024-11-20 12:50:46.766496] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:21.291 [2024-11-20 12:50:46.766504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.291 [2024-11-20 12:50:46.766512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:21.291 [2024-11-20 12:50:46.766521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.997 ms 00:18:21.291 [2024-11-20 12:50:46.766528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.291 [2024-11-20 12:50:46.778834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.291 [2024-11-20 12:50:46.778863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:21.291 [2024-11-20 12:50:46.778874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.267 ms 00:18:21.291 [2024-11-20 12:50:46.778881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.291 [2024-11-20 12:50:46.779235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.291 [2024-11-20 12:50:46.779248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:21.291 [2024-11-20 12:50:46.779259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:18:21.291 [2024-11-20 12:50:46.779265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.557 [2024-11-20 12:50:46.822129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.557 [2024-11-20 12:50:46.822165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:21.557 [2024-11-20 12:50:46.822179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.557 [2024-11-20 12:50:46.822188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
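(Note on the statistics dumped above: the "WAF: inf" value follows directly from the two counters printed beside it — assuming SPDK's FTL reports write amplification as total writes divided by user writes, WAF = 960 / 0, which is infinite here because this shutdown path recorded 960 internal writes and no user writes.)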
00:18:21.557 [2024-11-20 12:50:46.822244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.557 [2024-11-20 12:50:46.822252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:21.557 [2024-11-20 12:50:46.822261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.557 [2024-11-20 12:50:46.822268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.557 [2024-11-20 12:50:46.822345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.557 [2024-11-20 12:50:46.822355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:21.557 [2024-11-20 12:50:46.822366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.557 [2024-11-20 12:50:46.822374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.557 [2024-11-20 12:50:46.822397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.557 [2024-11-20 12:50:46.822404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:21.557 [2024-11-20 12:50:46.822413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.557 [2024-11-20 12:50:46.822420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.557 [2024-11-20 12:50:46.903343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.557 [2024-11-20 12:50:46.903385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:21.557 [2024-11-20 12:50:46.903397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.557 [2024-11-20 12:50:46.903404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.557 [2024-11-20 12:50:46.964893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.557 [2024-11-20 12:50:46.965065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:21.557 [2024-11-20 12:50:46.965084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.557 [2024-11-20 12:50:46.965093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.557 [2024-11-20 12:50:46.965176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.557 [2024-11-20 12:50:46.965186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:21.557 [2024-11-20 12:50:46.965195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.557 [2024-11-20 12:50:46.965206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.557 [2024-11-20 12:50:46.965264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.557 [2024-11-20 12:50:46.965273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:21.557 [2024-11-20 12:50:46.965282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.557 [2024-11-20 12:50:46.965289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.557 [2024-11-20 12:50:46.965399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.557 [2024-11-20 12:50:46.965409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:21.557 [2024-11-20 12:50:46.965418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.557 [2024-11-20 
12:50:46.965425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.557 [2024-11-20 12:50:46.965476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.557 [2024-11-20 12:50:46.965485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:21.557 [2024-11-20 12:50:46.965494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.558 [2024-11-20 12:50:46.965501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.558 [2024-11-20 12:50:46.965538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.558 [2024-11-20 12:50:46.965546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:21.558 [2024-11-20 12:50:46.965555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.558 [2024-11-20 12:50:46.965562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.558 [2024-11-20 12:50:46.965616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.558 [2024-11-20 12:50:46.965626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:21.558 [2024-11-20 12:50:46.965635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.558 [2024-11-20 12:50:46.965641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.558 [2024-11-20 12:50:46.965815] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 340.485 ms, result 0 00:18:21.558 true 00:18:21.558 12:50:46 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75248 00:18:21.558 12:50:46 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75248 ']' 00:18:21.558 12:50:46 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75248 00:18:21.558 12:50:46 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:21.558 12:50:46 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.558 12:50:46 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75248 00:18:21.558 killing process with pid 75248 00:18:21.558 12:50:47 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:21.558 12:50:47 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:21.558 12:50:47 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75248' 00:18:21.558 12:50:47 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75248 00:18:21.558 12:50:47 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75248 00:18:31.544 12:50:56 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:31.544 12:50:56 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:31.544 12:50:56 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:31.544 12:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:31.544 12:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:31.544 12:50:56 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:31.544 12:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:31.544 12:50:56 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:31.544 12:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:31.544 12:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:31.544 12:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:31.544 12:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:31.544 12:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:31.544 12:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:31.544 12:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:31.544 12:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:31.544 12:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:31.544 12:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:31.544 12:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:31.544 12:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:31.544 12:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:31.544 12:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:31.544 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:31.544 fio-3.35 00:18:31.544 Starting 1 thread 00:18:35.723 00:18:35.723 test: (groupid=0, jobs=1): err= 0: pid=75440: Wed Nov 20 12:51:00 2024 00:18:35.723 read: IOPS=1354, BW=89.9MiB/s (94.3MB/s)(255MiB/2830msec) 00:18:35.723 slat (nsec): min=2854, max=15648, avg=3627.38, stdev=1443.45 00:18:35.723 clat (usec): min=236, max=26263, avg=326.36, stdev=422.10 00:18:35.723 lat (usec): min=240, max=26267, avg=329.98, stdev=422.17 00:18:35.723 clat percentiles (usec): 00:18:35.723 | 1.00th=[ 265], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 302], 00:18:35.723 | 30.00th=[ 306], 40.00th=[ 306], 50.00th=[ 310], 60.00th=[ 314], 00:18:35.723 | 70.00th=[ 318], 80.00th=[ 322], 90.00th=[ 363], 95.00th=[ 416], 00:18:35.723 | 99.00th=[ 562], 99.50th=[ 627], 99.90th=[ 791], 99.95th=[ 906], 00:18:35.723 | 99.99th=[26346] 00:18:35.723 write: IOPS=1363, BW=90.6MiB/s (95.0MB/s)(256MiB/2827msec); 0 zone resets 00:18:35.723 slat (usec): min=13, max=140, avg=19.63, stdev= 4.76 00:18:35.723 clat (usec): min=263, max=28724, avg=372.08, stdev=461.55 00:18:35.723 lat (usec): min=285, max=28738, avg=391.72, stdev=461.50 00:18:35.723 clat percentiles (usec): 00:18:35.723 | 1.00th=[ 293], 5.00th=[ 306], 10.00th=[ 322], 20.00th=[ 330], 00:18:35.723 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 343], 60.00th=[ 355], 00:18:35.723 | 70.00th=[ 379], 80.00th=[ 396], 90.00th=[ 408], 95.00th=[ 478], 00:18:35.723 | 99.00th=[ 676], 99.50th=[ 709], 99.90th=[ 881], 99.95th=[ 1074], 00:18:35.723 | 99.99th=[28705] 00:18:35.723 bw ( KiB/s): min=87312, max=96424, per=99.53%, avg=92316.80, stdev=3401.90, samples=5 00:18:35.723 iops : min= 1284, max= 1418, avg=1357.60, stdev=50.03, samples=5 00:18:35.723 lat (usec) : 250=0.14%, 500=96.75%, 750=2.90%, 1000=0.17% 
00:18:35.723 lat (msec) : 2=0.01%, 50=0.03% 00:18:35.723 cpu : usr=99.36%, sys=0.07%, ctx=6, majf=0, minf=1169 00:18:35.723 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:35.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.723 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.723 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:35.723 00:18:35.723 Run status group 0 (all jobs): 00:18:35.723 READ: bw=89.9MiB/s (94.3MB/s), 89.9MiB/s-89.9MiB/s (94.3MB/s-94.3MB/s), io=255MiB (267MB), run=2830-2830msec 00:18:35.723 WRITE: bw=90.6MiB/s (95.0MB/s), 90.6MiB/s-90.6MiB/s (95.0MB/s-95.0MB/s), io=256MiB (269MB), run=2827-2827msec 00:18:36.655 ----------------------------------------------------- 00:18:36.655 Suppressions used: 00:18:36.655 count bytes template 00:18:36.655 1 5 /usr/src/fio/parse.c 00:18:36.655 1 8 libtcmalloc_minimal.so 00:18:36.655 1 904 libcrypto.so 00:18:36.655 ----------------------------------------------------- 00:18:36.655 00:18:36.655 12:51:02 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:36.655 12:51:02 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:36.655 12:51:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:36.655 12:51:02 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:36.655 12:51:02 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:36.655 12:51:02 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:36.655 12:51:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:36.655 12:51:02 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:36.655 12:51:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:36.655 12:51:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:36.655 12:51:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:36.655 12:51:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:36.655 12:51:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:36.655 12:51:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:36.655 12:51:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:36.655 12:51:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:36.655 12:51:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:36.655 12:51:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:36.655 12:51:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:36.655 12:51:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:36.655 12:51:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:36.655 12:51:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:36.656 12:51:02 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:36.656 12:51:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:36.913 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:36.913 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:36.913 fio-3.35 00:18:36.913 Starting 2 threads 00:19:03.442 00:19:03.442 first_half: (groupid=0, jobs=1): err= 0: pid=75526: Wed Nov 20 12:51:25 2024 00:19:03.442 read: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(255MiB/21512msec) 00:19:03.442 slat (nsec): min=2995, max=17508, avg=3683.53, stdev=692.65 00:19:03.442 clat (usec): min=694, max=417831, avg=33532.07, stdev=17732.66 00:19:03.442 lat (usec): min=697, max=417835, avg=33535.75, stdev=17732.69 00:19:03.442 clat percentiles (msec): 00:19:03.442 | 1.00th=[ 14], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 29], 00:19:03.442 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:19:03.442 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 40], 95.00th=[ 51], 00:19:03.442 | 99.00th=[ 123], 99.50th=[ 140], 99.90th=[ 201], 99.95th=[ 334], 00:19:03.442 | 99.99th=[ 405] 00:19:03.442 write: IOPS=3409, BW=13.3MiB/s (14.0MB/s)(256MiB/19224msec); 0 zone resets 00:19:03.442 slat (usec): min=3, max=2099, avg= 5.42, stdev= 9.91 00:19:03.442 clat (usec): min=329, max=78906, avg=8582.91, stdev=13252.70 00:19:03.442 lat (usec): min=335, max=78910, avg=8588.33, stdev=13252.82 00:19:03.442 clat percentiles (usec): 00:19:03.442 | 1.00th=[ 660], 5.00th=[ 807], 10.00th=[ 979], 20.00th=[ 1369], 00:19:03.442 | 30.00th=[ 2671], 40.00th=[ 3654], 50.00th=[ 4555], 60.00th=[ 5211], 00:19:03.442 | 70.00th=[ 5800], 80.00th=[11469], 90.00th=[17957], 95.00th=[29754], 00:19:03.442 | 99.00th=[65799], 99.50th=[68682], 99.90th=[73925], 99.95th=[74974], 00:19:03.442 | 99.99th=[78119] 00:19:03.442 bw ( KiB/s): min= 2168, max=40256, per=93.67%, avg=24963.38, stdev=12960.12, samples=21 00:19:03.442 iops : min= 542, max=10064, avg=6240.81, stdev=3240.02, samples=21 00:19:03.442 lat (usec) : 500=0.06%, 750=1.63%, 1000=3.66% 00:19:03.442 lat (msec) : 2=7.15%, 4=9.36%, 10=17.74%, 20=7.68%, 50=47.82% 00:19:03.442 lat (msec) : 100=4.09%, 250=0.78%, 500=0.04% 00:19:03.442 cpu : usr=99.24%, sys=0.14%, ctx=39, majf=0, minf=5585 00:19:03.443 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:03.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.443 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:03.443 issued rwts: total=65309,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.443 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:03.443 second_half: (groupid=0, jobs=1): err= 0: pid=75527: Wed Nov 20 12:51:25 2024 00:19:03.443 read: IOPS=3016, BW=11.8MiB/s (12.4MB/s)(255MiB/21654msec) 00:19:03.443 slat (nsec): min=2961, max=18187, avg=3676.32, stdev=629.37 00:19:03.443 clat (usec): min=675, max=426980, avg=32997.97, stdev=19500.54 00:19:03.443 lat (usec): min=679, max=426984, avg=33001.64, stdev=19500.59 00:19:03.443 clat percentiles (msec): 00:19:03.443 | 1.00th=[ 7], 5.00th=[ 26], 10.00th=[ 27], 20.00th=[ 29], 00:19:03.443 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 30], 60.00th=[ 30], 00:19:03.443 | 70.00th=[ 31], 80.00th=[ 34], 90.00th=[ 38], 
95.00th=[ 48], 00:19:03.443 | 99.00th=[ 136], 99.50th=[ 148], 99.90th=[ 207], 99.95th=[ 321], 00:19:03.443 | 99.99th=[ 422] 00:19:03.443 write: IOPS=3331, BW=13.0MiB/s (13.6MB/s)(256MiB/19674msec); 0 zone resets 00:19:03.443 slat (usec): min=3, max=654, avg= 5.29, stdev= 3.98 00:19:03.443 clat (usec): min=354, max=78946, avg=9386.81, stdev=14287.39 00:19:03.443 lat (usec): min=360, max=78951, avg=9392.10, stdev=14287.49 00:19:03.443 clat percentiles (usec): 00:19:03.443 | 1.00th=[ 660], 5.00th=[ 799], 10.00th=[ 955], 20.00th=[ 1205], 00:19:03.443 | 30.00th=[ 1614], 40.00th=[ 2769], 50.00th=[ 4146], 60.00th=[ 5145], 00:19:03.443 | 70.00th=[ 6521], 80.00th=[14615], 90.00th=[23987], 95.00th=[38011], 00:19:03.443 | 99.00th=[66847], 99.50th=[68682], 99.90th=[74974], 99.95th=[76022], 00:19:03.443 | 99.99th=[78119] 00:19:03.443 bw ( KiB/s): min= 432, max=71824, per=85.54%, avg=22795.13, stdev=18798.49, samples=23 00:19:03.443 iops : min= 108, max=17956, avg=5698.78, stdev=4699.62, samples=23 00:19:03.443 lat (usec) : 500=0.04%, 750=1.74%, 1000=4.12% 00:19:03.443 lat (msec) : 2=10.78%, 4=7.99%, 10=14.10%, 20=7.74%, 50=48.80% 00:19:03.443 lat (msec) : 100=3.62%, 250=1.04%, 500=0.04% 00:19:03.443 cpu : usr=99.48%, sys=0.11%, ctx=36, majf=0, minf=5522 00:19:03.443 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:03.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:03.443 issued rwts: total=65319,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.443 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:03.443 00:19:03.443 Run status group 0 (all jobs): 00:19:03.443 READ: bw=23.6MiB/s (24.7MB/s), 11.8MiB/s-11.9MiB/s (12.4MB/s-12.4MB/s), io=510MiB (535MB), run=21512-21654msec 00:19:03.443 WRITE: bw=26.0MiB/s (27.3MB/s), 13.0MiB/s-13.3MiB/s (13.6MB/s-14.0MB/s), io=512MiB (537MB), run=19224-19674msec 00:19:03.443 ----------------------------------------------------- 00:19:03.443 Suppressions used: 00:19:03.443 count bytes template 00:19:03.443 2 10 /usr/src/fio/parse.c 00:19:03.443 4 384 /usr/src/fio/iolog.c 00:19:03.443 1 8 libtcmalloc_minimal.so 00:19:03.443 1 904 libcrypto.so 00:19:03.443 ----------------------------------------------------- 00:19:03.443 00:19:03.443 12:51:27 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:19:03.443 12:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:03.443 12:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:03.443 12:51:27 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:03.443 12:51:27 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:19:03.443 12:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:03.443 12:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:03.443 12:51:27 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:03.443 12:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:03.443 12:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:03.443 12:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:19:03.443 12:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:03.443 12:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:03.443 12:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:03.443 12:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:03.443 12:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:03.443 12:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:03.443 12:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:03.443 12:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:03.443 12:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:03.443 12:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:03.443 12:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:03.443 12:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:03.443 12:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:03.443 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:03.443 fio-3.35 00:19:03.443 Starting 1 thread 00:19:15.663 00:19:15.663 test: (groupid=0, jobs=1): err= 0: pid=75818: Wed Nov 20 12:51:40 2024 00:19:15.663 read: IOPS=8200, BW=32.0MiB/s (33.6MB/s)(255MiB/7951msec) 00:19:15.663 slat (nsec): min=3065, max=22605, avg=3429.70, stdev=643.49 00:19:15.663 clat (usec): min=480, max=29940, avg=15602.12, stdev=1792.14 00:19:15.663 lat (usec): min=484, max=29943, avg=15605.55, stdev=1792.17 00:19:15.663 clat percentiles (usec): 00:19:15.663 | 1.00th=[14353], 5.00th=[14484], 10.00th=[14615], 20.00th=[14746], 00:19:15.663 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15139], 60.00th=[15270], 00:19:15.663 | 70.00th=[15401], 80.00th=[15533], 90.00th=[16581], 95.00th=[19792], 00:19:15.663 | 99.00th=[23462], 99.50th=[23987], 99.90th=[27657], 99.95th=[28705], 00:19:15.663 | 99.99th=[29230] 00:19:15.663 write: IOPS=16.8k, BW=65.5MiB/s (68.7MB/s)(256MiB/3908msec); 0 zone resets 00:19:15.663 slat (usec): min=4, max=321, avg= 5.79, stdev= 2.40 00:19:15.663 clat (usec): min=413, max=45539, avg=7593.40, stdev=10028.27 00:19:15.663 lat (usec): min=422, max=45544, avg=7599.19, stdev=10028.18 00:19:15.663 clat percentiles (usec): 00:19:15.663 | 1.00th=[ 611], 5.00th=[ 783], 10.00th=[ 873], 20.00th=[ 1020], 00:19:15.663 | 30.00th=[ 1172], 40.00th=[ 1532], 50.00th=[ 4555], 60.00th=[ 5211], 00:19:15.663 | 70.00th=[ 6128], 80.00th=[ 7570], 90.00th=[29492], 95.00th=[31327], 00:19:15.663 | 99.00th=[34341], 99.50th=[35390], 99.90th=[39584], 99.95th=[40633], 00:19:15.663 | 99.99th=[43779] 00:19:15.663 bw ( KiB/s): min=45736, max=98160, per=97.70%, avg=65536.00, stdev=18227.91, samples=8 00:19:15.663 iops : min=11434, max=24540, avg=16384.00, stdev=4556.98, samples=8 00:19:15.663 lat (usec) : 500=0.03%, 750=1.95%, 1000=7.39% 00:19:15.663 lat (msec) : 2=11.33%, 4=1.70%, 10=19.66%, 20=47.64%, 50=10.30% 00:19:15.663 cpu : usr=99.13%, sys=0.19%, ctx=35, majf=0, minf=5565 
00:19:15.663 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:15.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.663 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:15.663 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.663 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:15.663 00:19:15.663 Run status group 0 (all jobs): 00:19:15.663 READ: bw=32.0MiB/s (33.6MB/s), 32.0MiB/s-32.0MiB/s (33.6MB/s-33.6MB/s), io=255MiB (267MB), run=7951-7951msec 00:19:15.663 WRITE: bw=65.5MiB/s (68.7MB/s), 65.5MiB/s-65.5MiB/s (68.7MB/s-68.7MB/s), io=256MiB (268MB), run=3908-3908msec 00:19:17.050 ----------------------------------------------------- 00:19:17.050 Suppressions used: 00:19:17.050 count bytes template 00:19:17.050 1 5 /usr/src/fio/parse.c 00:19:17.050 2 192 /usr/src/fio/iolog.c 00:19:17.050 1 8 libtcmalloc_minimal.so 00:19:17.050 1 904 libcrypto.so 00:19:17.050 ----------------------------------------------------- 00:19:17.050 00:19:17.050 12:51:42 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:17.050 12:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:17.050 12:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:17.050 12:51:42 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:17.050 Remove shared memory files 00:19:17.050 12:51:42 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:17.050 12:51:42 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:17.050 12:51:42 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:17.050 12:51:42 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:17.050 12:51:42 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57137 /dev/shm/spdk_tgt_trace.pid74175 00:19:17.050 12:51:42 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:17.050 12:51:42 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:17.050 ************************************ 00:19:17.050 END TEST ftl_fio_basic 00:19:17.050 ************************************ 00:19:17.050 00:19:17.050 real 1m3.233s 00:19:17.050 user 2m21.792s 00:19:17.050 sys 0m2.665s 00:19:17.050 12:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:17.050 12:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:17.050 12:51:42 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:17.050 12:51:42 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:17.050 12:51:42 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.050 12:51:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:17.050 ************************************ 00:19:17.050 START TEST ftl_bdevperf 00:19:17.050 ************************************ 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:17.050 * Looking for test storage... 
00:19:17.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:17.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.050 --rc genhtml_branch_coverage=1 00:19:17.050 --rc genhtml_function_coverage=1 00:19:17.050 --rc genhtml_legend=1 00:19:17.050 --rc geninfo_all_blocks=1 00:19:17.050 --rc geninfo_unexecuted_blocks=1 00:19:17.050 00:19:17.050 ' 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:17.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.050 --rc genhtml_branch_coverage=1 00:19:17.050 
--rc genhtml_function_coverage=1 00:19:17.050 --rc genhtml_legend=1 00:19:17.050 --rc geninfo_all_blocks=1 00:19:17.050 --rc geninfo_unexecuted_blocks=1 00:19:17.050 00:19:17.050 ' 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:17.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.050 --rc genhtml_branch_coverage=1 00:19:17.050 --rc genhtml_function_coverage=1 00:19:17.050 --rc genhtml_legend=1 00:19:17.050 --rc geninfo_all_blocks=1 00:19:17.050 --rc geninfo_unexecuted_blocks=1 00:19:17.050 00:19:17.050 ' 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:17.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.050 --rc genhtml_branch_coverage=1 00:19:17.050 --rc genhtml_function_coverage=1 00:19:17.050 --rc genhtml_legend=1 00:19:17.050 --rc geninfo_all_blocks=1 00:19:17.050 --rc geninfo_unexecuted_blocks=1 00:19:17.050 00:19:17.050 ' 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:17.050 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=76044 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 76044 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 76044 ']' 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.051 12:51:42 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:17.051 [2024-11-20 12:51:42.535042] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
00:19:17.051 [2024-11-20 12:51:42.535195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76044 ] 00:19:17.316 [2024-11-20 12:51:42.700003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.316 [2024-11-20 12:51:42.819493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.951 12:51:43 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:17.951 12:51:43 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:17.951 12:51:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:17.951 12:51:43 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:17.951 12:51:43 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:17.951 12:51:43 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:17.951 12:51:43 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:17.951 12:51:43 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:18.211 12:51:43 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:18.211 12:51:43 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:18.211 12:51:43 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:18.211 12:51:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:18.211 12:51:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:18.211 12:51:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:18.211 12:51:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:18.211 12:51:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:18.472 12:51:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:18.472 { 00:19:18.472 "name": "nvme0n1", 00:19:18.472 "aliases": [ 00:19:18.472 "549ba577-3e4d-4771-903e-c1d2f83d7110" 00:19:18.472 ], 00:19:18.472 "product_name": "NVMe disk", 00:19:18.472 "block_size": 4096, 00:19:18.472 "num_blocks": 1310720, 00:19:18.472 "uuid": "549ba577-3e4d-4771-903e-c1d2f83d7110", 00:19:18.472 "numa_id": -1, 00:19:18.472 "assigned_rate_limits": { 00:19:18.472 "rw_ios_per_sec": 0, 00:19:18.472 "rw_mbytes_per_sec": 0, 00:19:18.472 "r_mbytes_per_sec": 0, 00:19:18.472 "w_mbytes_per_sec": 0 00:19:18.472 }, 00:19:18.472 "claimed": true, 00:19:18.472 "claim_type": "read_many_write_one", 00:19:18.472 "zoned": false, 00:19:18.472 "supported_io_types": { 00:19:18.472 "read": true, 00:19:18.472 "write": true, 00:19:18.472 "unmap": true, 00:19:18.472 "flush": true, 00:19:18.472 "reset": true, 00:19:18.472 "nvme_admin": true, 00:19:18.472 "nvme_io": true, 00:19:18.472 "nvme_io_md": false, 00:19:18.472 "write_zeroes": true, 00:19:18.472 "zcopy": false, 00:19:18.472 "get_zone_info": false, 00:19:18.472 "zone_management": false, 00:19:18.472 "zone_append": false, 00:19:18.472 "compare": true, 00:19:18.472 "compare_and_write": false, 00:19:18.472 "abort": true, 00:19:18.472 "seek_hole": false, 00:19:18.472 "seek_data": false, 00:19:18.472 "copy": true, 00:19:18.472 "nvme_iov_md": false 00:19:18.472 }, 00:19:18.472 "driver_specific": { 00:19:18.472 
"nvme": [ 00:19:18.472 { 00:19:18.472 "pci_address": "0000:00:11.0", 00:19:18.472 "trid": { 00:19:18.472 "trtype": "PCIe", 00:19:18.472 "traddr": "0000:00:11.0" 00:19:18.472 }, 00:19:18.472 "ctrlr_data": { 00:19:18.472 "cntlid": 0, 00:19:18.472 "vendor_id": "0x1b36", 00:19:18.472 "model_number": "QEMU NVMe Ctrl", 00:19:18.472 "serial_number": "12341", 00:19:18.472 "firmware_revision": "8.0.0", 00:19:18.472 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:18.472 "oacs": { 00:19:18.472 "security": 0, 00:19:18.472 "format": 1, 00:19:18.472 "firmware": 0, 00:19:18.472 "ns_manage": 1 00:19:18.472 }, 00:19:18.472 "multi_ctrlr": false, 00:19:18.472 "ana_reporting": false 00:19:18.472 }, 00:19:18.472 "vs": { 00:19:18.472 "nvme_version": "1.4" 00:19:18.472 }, 00:19:18.472 "ns_data": { 00:19:18.472 "id": 1, 00:19:18.472 "can_share": false 00:19:18.472 } 00:19:18.472 } 00:19:18.472 ], 00:19:18.472 "mp_policy": "active_passive" 00:19:18.472 } 00:19:18.472 } 00:19:18.472 ]' 00:19:18.472 12:51:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:18.472 12:51:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:18.472 12:51:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:18.472 12:51:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:18.472 12:51:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:18.472 12:51:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:18.472 12:51:43 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:18.472 12:51:43 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:18.472 12:51:43 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:18.472 12:51:43 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:18.472 12:51:43 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:18.733 12:51:44 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=0ae1273f-3452-4db1-bd9d-db7e8eaae1c5 00:19:18.733 12:51:44 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:18.733 12:51:44 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0ae1273f-3452-4db1-bd9d-db7e8eaae1c5 00:19:19.006 12:51:44 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:19.276 12:51:44 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=57cfb3f1-2f3b-497b-aff4-d83ccacc3532 00:19:19.276 12:51:44 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 57cfb3f1-2f3b-497b-aff4-d83ccacc3532 00:19:19.276 12:51:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=975efcc6-4c97-47c9-894f-3a913b05576f 00:19:19.538 12:51:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 975efcc6-4c97-47c9-894f-3a913b05576f 00:19:19.538 12:51:44 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:19.538 12:51:44 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:19.538 12:51:44 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=975efcc6-4c97-47c9-894f-3a913b05576f 00:19:19.538 12:51:44 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:19.538 12:51:44 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 975efcc6-4c97-47c9-894f-3a913b05576f 00:19:19.538 12:51:44 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=975efcc6-4c97-47c9-894f-3a913b05576f 00:19:19.538 12:51:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:19.538 12:51:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:19.538 12:51:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:19.538 12:51:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 975efcc6-4c97-47c9-894f-3a913b05576f 00:19:19.538 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:19.538 { 00:19:19.538 "name": "975efcc6-4c97-47c9-894f-3a913b05576f", 00:19:19.538 "aliases": [ 00:19:19.538 "lvs/nvme0n1p0" 00:19:19.538 ], 00:19:19.538 "product_name": "Logical Volume", 00:19:19.538 "block_size": 4096, 00:19:19.538 "num_blocks": 26476544, 00:19:19.538 "uuid": "975efcc6-4c97-47c9-894f-3a913b05576f", 00:19:19.538 "assigned_rate_limits": { 00:19:19.538 "rw_ios_per_sec": 0, 00:19:19.538 "rw_mbytes_per_sec": 0, 00:19:19.538 "r_mbytes_per_sec": 0, 00:19:19.538 "w_mbytes_per_sec": 0 00:19:19.538 }, 00:19:19.538 "claimed": false, 00:19:19.538 "zoned": false, 00:19:19.538 "supported_io_types": { 00:19:19.538 "read": true, 00:19:19.538 "write": true, 00:19:19.538 "unmap": true, 00:19:19.538 "flush": false, 00:19:19.538 "reset": true, 00:19:19.538 "nvme_admin": false, 00:19:19.538 "nvme_io": false, 00:19:19.538 "nvme_io_md": false, 00:19:19.538 "write_zeroes": true, 00:19:19.538 "zcopy": false, 00:19:19.538 "get_zone_info": false, 00:19:19.538 "zone_management": false, 00:19:19.538 "zone_append": false, 00:19:19.538 "compare": false, 00:19:19.538 "compare_and_write": false, 00:19:19.538 "abort": false, 00:19:19.538 "seek_hole": true, 00:19:19.538 "seek_data": true, 00:19:19.538 "copy": false, 00:19:19.538 "nvme_iov_md": false 00:19:19.538 }, 00:19:19.538 "driver_specific": { 00:19:19.538 "lvol": { 00:19:19.538 "lvol_store_uuid": "57cfb3f1-2f3b-497b-aff4-d83ccacc3532", 00:19:19.538 "base_bdev": "nvme0n1", 00:19:19.538 "thin_provision": true, 00:19:19.538 "num_allocated_clusters": 0, 00:19:19.538 "snapshot": false, 00:19:19.538 "clone": false, 00:19:19.538 "esnap_clone": false 00:19:19.538 } 00:19:19.538 } 00:19:19.538 } 00:19:19.538 ]' 00:19:19.538 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:19.538 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:19.538 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:19.799 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:19.799 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:19.799 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:19.799 12:51:45 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:19.799 12:51:45 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:19.799 12:51:45 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:19.799 12:51:45 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:19.799 12:51:45 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:19.799 12:51:45 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 975efcc6-4c97-47c9-894f-3a913b05576f 00:19:19.799 12:51:45 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=975efcc6-4c97-47c9-894f-3a913b05576f 00:19:19.799 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:19.799 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:19.799 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:19.799 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 975efcc6-4c97-47c9-894f-3a913b05576f 00:19:20.059 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:20.059 { 00:19:20.059 "name": "975efcc6-4c97-47c9-894f-3a913b05576f", 00:19:20.059 "aliases": [ 00:19:20.059 "lvs/nvme0n1p0" 00:19:20.059 ], 00:19:20.059 "product_name": "Logical Volume", 00:19:20.059 "block_size": 4096, 00:19:20.059 "num_blocks": 26476544, 00:19:20.059 "uuid": "975efcc6-4c97-47c9-894f-3a913b05576f", 00:19:20.059 "assigned_rate_limits": { 00:19:20.059 "rw_ios_per_sec": 0, 00:19:20.059 "rw_mbytes_per_sec": 0, 00:19:20.059 "r_mbytes_per_sec": 0, 00:19:20.059 "w_mbytes_per_sec": 0 00:19:20.059 }, 00:19:20.059 "claimed": false, 00:19:20.059 "zoned": false, 00:19:20.059 "supported_io_types": { 00:19:20.059 "read": true, 00:19:20.059 "write": true, 00:19:20.059 "unmap": true, 00:19:20.059 "flush": false, 00:19:20.059 "reset": true, 00:19:20.059 "nvme_admin": false, 00:19:20.059 "nvme_io": false, 00:19:20.059 "nvme_io_md": false, 00:19:20.059 "write_zeroes": true, 00:19:20.059 "zcopy": false, 00:19:20.059 "get_zone_info": false, 00:19:20.059 "zone_management": false, 00:19:20.059 "zone_append": false, 00:19:20.059 "compare": false, 00:19:20.059 "compare_and_write": false, 00:19:20.059 "abort": false, 00:19:20.059 "seek_hole": true, 00:19:20.059 "seek_data": true, 00:19:20.059 "copy": false, 00:19:20.059 "nvme_iov_md": false 00:19:20.059 }, 00:19:20.059 "driver_specific": { 00:19:20.059 "lvol": { 00:19:20.059 "lvol_store_uuid": "57cfb3f1-2f3b-497b-aff4-d83ccacc3532", 00:19:20.059 "base_bdev": "nvme0n1", 00:19:20.059 "thin_provision": true, 00:19:20.059 "num_allocated_clusters": 0, 00:19:20.059 "snapshot": false, 00:19:20.059 "clone": false, 00:19:20.059 "esnap_clone": false 00:19:20.059 } 00:19:20.059 } 00:19:20.059 } 00:19:20.059 ]' 00:19:20.059 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:20.059 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:20.059 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:20.059 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:20.059 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:20.059 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:20.059 12:51:45 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:20.059 12:51:45 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:20.320 12:51:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:20.320 12:51:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 975efcc6-4c97-47c9-894f-3a913b05576f 00:19:20.320 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=975efcc6-4c97-47c9-894f-3a913b05576f 00:19:20.320 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:20.320 12:51:45 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:20.320 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:20.320 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 975efcc6-4c97-47c9-894f-3a913b05576f 00:19:20.580 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:20.580 { 00:19:20.580 "name": "975efcc6-4c97-47c9-894f-3a913b05576f", 00:19:20.580 "aliases": [ 00:19:20.580 "lvs/nvme0n1p0" 00:19:20.580 ], 00:19:20.580 "product_name": "Logical Volume", 00:19:20.580 "block_size": 4096, 00:19:20.580 "num_blocks": 26476544, 00:19:20.580 "uuid": "975efcc6-4c97-47c9-894f-3a913b05576f", 00:19:20.580 "assigned_rate_limits": { 00:19:20.580 "rw_ios_per_sec": 0, 00:19:20.580 "rw_mbytes_per_sec": 0, 00:19:20.580 "r_mbytes_per_sec": 0, 00:19:20.580 "w_mbytes_per_sec": 0 00:19:20.580 }, 00:19:20.580 "claimed": false, 00:19:20.580 "zoned": false, 00:19:20.580 "supported_io_types": { 00:19:20.580 "read": true, 00:19:20.580 "write": true, 00:19:20.580 "unmap": true, 00:19:20.580 "flush": false, 00:19:20.580 "reset": true, 00:19:20.580 "nvme_admin": false, 00:19:20.580 "nvme_io": false, 00:19:20.580 "nvme_io_md": false, 00:19:20.580 "write_zeroes": true, 00:19:20.580 "zcopy": false, 00:19:20.580 "get_zone_info": false, 00:19:20.580 "zone_management": false, 00:19:20.580 "zone_append": false, 00:19:20.580 "compare": false, 00:19:20.580 "compare_and_write": false, 00:19:20.580 "abort": false, 00:19:20.580 "seek_hole": true, 00:19:20.580 "seek_data": true, 00:19:20.580 "copy": false, 00:19:20.580 "nvme_iov_md": false 00:19:20.580 }, 00:19:20.580 "driver_specific": { 00:19:20.580 "lvol": { 00:19:20.580 "lvol_store_uuid": "57cfb3f1-2f3b-497b-aff4-d83ccacc3532", 00:19:20.580 "base_bdev": "nvme0n1", 00:19:20.580 "thin_provision": true, 00:19:20.580 "num_allocated_clusters": 0, 00:19:20.580 "snapshot": false, 00:19:20.580 "clone": false, 00:19:20.580 "esnap_clone": false 00:19:20.580 } 00:19:20.580 } 00:19:20.580 } 00:19:20.580 ]' 00:19:20.580 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:20.580 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:20.580 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:20.580 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:20.580 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:20.580 12:51:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:20.580 12:51:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:20.580 12:51:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 975efcc6-4c97-47c9-894f-3a913b05576f -c nvc0n1p0 --l2p_dram_limit 20 00:19:20.843 [2024-11-20 12:51:46.176979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.843 [2024-11-20 12:51:46.177017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:20.843 [2024-11-20 12:51:46.177029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:20.843 [2024-11-20 12:51:46.177037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.843 [2024-11-20 12:51:46.177077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.843 [2024-11-20 12:51:46.177088] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:20.843 [2024-11-20 12:51:46.177095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:20.843 [2024-11-20 12:51:46.177102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.843 [2024-11-20 12:51:46.177115] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:20.843 [2024-11-20 12:51:46.177674] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:20.843 [2024-11-20 12:51:46.177686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.843 [2024-11-20 12:51:46.177693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:20.843 [2024-11-20 12:51:46.177700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:19:20.843 [2024-11-20 12:51:46.177708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.843 [2024-11-20 12:51:46.177755] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7ad62697-b95a-4a36-b555-47ad197c16d6 00:19:20.843 [2024-11-20 12:51:46.178693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.843 [2024-11-20 12:51:46.178718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:20.843 [2024-11-20 12:51:46.178727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:20.843 [2024-11-20 12:51:46.178736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.843 [2024-11-20 12:51:46.183444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.843 [2024-11-20 12:51:46.183466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:20.843 [2024-11-20 12:51:46.183475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.647 ms 00:19:20.843 [2024-11-20 12:51:46.183481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.843 [2024-11-20 12:51:46.183549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.843 [2024-11-20 12:51:46.183557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:20.843 [2024-11-20 12:51:46.183567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:20.843 [2024-11-20 12:51:46.183573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.843 [2024-11-20 12:51:46.183608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.843 [2024-11-20 12:51:46.183616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:20.843 [2024-11-20 12:51:46.183623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:20.843 [2024-11-20 12:51:46.183628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.843 [2024-11-20 12:51:46.183645] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:20.843 [2024-11-20 12:51:46.186475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.843 [2024-11-20 12:51:46.186498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:20.843 [2024-11-20 12:51:46.186506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.836 ms 00:19:20.843 [2024-11-20 12:51:46.186514] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.843 [2024-11-20 12:51:46.186539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.843 [2024-11-20 12:51:46.186546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:20.843 [2024-11-20 12:51:46.186552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:20.843 [2024-11-20 12:51:46.186559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.843 [2024-11-20 12:51:46.186575] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:20.843 [2024-11-20 12:51:46.186684] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:20.843 [2024-11-20 12:51:46.186696] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:20.843 [2024-11-20 12:51:46.186705] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:20.843 [2024-11-20 12:51:46.186713] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:20.843 [2024-11-20 12:51:46.186721] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:20.843 [2024-11-20 12:51:46.186727] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:20.843 [2024-11-20 12:51:46.186734] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:20.843 [2024-11-20 12:51:46.186750] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:20.843 [2024-11-20 12:51:46.186757] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:20.843 [2024-11-20 12:51:46.186763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.843 [2024-11-20 12:51:46.186772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:20.843 [2024-11-20 12:51:46.186778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:19:20.843 [2024-11-20 12:51:46.186787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.843 [2024-11-20 12:51:46.186849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.843 [2024-11-20 12:51:46.186856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:20.843 [2024-11-20 12:51:46.186862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:20.843 [2024-11-20 12:51:46.186871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.843 [2024-11-20 12:51:46.186939] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:20.843 [2024-11-20 12:51:46.186951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:20.843 [2024-11-20 12:51:46.186959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:20.843 [2024-11-20 12:51:46.186967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.843 [2024-11-20 12:51:46.186973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:20.843 [2024-11-20 12:51:46.186979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:20.843 [2024-11-20 12:51:46.186984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:20.843 
[2024-11-20 12:51:46.186991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:20.843 [2024-11-20 12:51:46.186997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:20.843 [2024-11-20 12:51:46.187004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:20.843 [2024-11-20 12:51:46.187009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:20.843 [2024-11-20 12:51:46.187015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:20.843 [2024-11-20 12:51:46.187020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:20.843 [2024-11-20 12:51:46.187031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:20.843 [2024-11-20 12:51:46.187037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:20.843 [2024-11-20 12:51:46.187044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.843 [2024-11-20 12:51:46.187050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:20.843 [2024-11-20 12:51:46.187057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:20.843 [2024-11-20 12:51:46.187062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.843 [2024-11-20 12:51:46.187069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:20.843 [2024-11-20 12:51:46.187074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:20.843 [2024-11-20 12:51:46.187080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.843 [2024-11-20 12:51:46.187085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:20.843 [2024-11-20 12:51:46.187090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:20.843 [2024-11-20 12:51:46.187096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.843 [2024-11-20 12:51:46.187102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:20.843 [2024-11-20 12:51:46.187107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:20.843 [2024-11-20 12:51:46.187113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.843 [2024-11-20 12:51:46.187117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:20.843 [2024-11-20 12:51:46.187124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:20.843 [2024-11-20 12:51:46.187128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.843 [2024-11-20 12:51:46.187136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:20.843 [2024-11-20 12:51:46.187141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:20.843 [2024-11-20 12:51:46.187147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:20.843 [2024-11-20 12:51:46.187152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:20.843 [2024-11-20 12:51:46.187159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:20.843 [2024-11-20 12:51:46.187163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:20.844 [2024-11-20 12:51:46.187170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:20.844 [2024-11-20 12:51:46.187175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:20.844 [2024-11-20 12:51:46.187181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.844 [2024-11-20 12:51:46.187187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:20.844 [2024-11-20 12:51:46.187194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:20.844 [2024-11-20 12:51:46.187199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.844 [2024-11-20 12:51:46.187206] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:20.844 [2024-11-20 12:51:46.187212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:20.844 [2024-11-20 12:51:46.187220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:20.844 [2024-11-20 12:51:46.187225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.844 [2024-11-20 12:51:46.187234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:20.844 [2024-11-20 12:51:46.187239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:20.844 [2024-11-20 12:51:46.187245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:20.844 [2024-11-20 12:51:46.187250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:20.844 [2024-11-20 12:51:46.187257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:20.844 [2024-11-20 12:51:46.187261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:20.844 [2024-11-20 12:51:46.187271] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:20.844 [2024-11-20 12:51:46.187278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:20.844 [2024-11-20 12:51:46.187285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:20.844 [2024-11-20 12:51:46.187291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:20.844 [2024-11-20 12:51:46.187297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:20.844 [2024-11-20 12:51:46.187303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:20.844 [2024-11-20 12:51:46.187310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:20.844 [2024-11-20 12:51:46.187315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:20.844 [2024-11-20 12:51:46.187322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:20.844 [2024-11-20 12:51:46.187327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:20.844 [2024-11-20 12:51:46.187335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:20.844 [2024-11-20 12:51:46.187340] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:20.844 [2024-11-20 12:51:46.187346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:20.844 [2024-11-20 12:51:46.187352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:20.844 [2024-11-20 12:51:46.187360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:20.844 [2024-11-20 12:51:46.187365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:20.844 [2024-11-20 12:51:46.187373] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:20.844 [2024-11-20 12:51:46.187379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:20.844 [2024-11-20 12:51:46.187386] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:20.844 [2024-11-20 12:51:46.187393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:20.844 [2024-11-20 12:51:46.187400] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:20.844 [2024-11-20 12:51:46.187420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:20.844 [2024-11-20 12:51:46.187428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.844 [2024-11-20 12:51:46.187435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:20.844 [2024-11-20 12:51:46.187442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:19:20.844 [2024-11-20 12:51:46.187448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.844 [2024-11-20 12:51:46.187475] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
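Note: the bdev_ftl_create call being traced here is the last step of a three-part RPC sequence: attach a PCIe controller to back the write-buffer cache, split off a cache partition, then build the FTL device over the logical volume. A minimal sketch of that sequence, with the bdev names, PCIe address, sizes, and rpc.py path all taken verbatim from this log:

  # Attach the NVMe controller that will back the NV cache
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  # Carve one 5171 MiB partition (nvc0n1p0) out of nvc0n1 to serve as the write buffer
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
  # Create ftl0 over the 103424 MiB lvol, caching through nvc0n1p0, with the L2P table capped at 20 MiB of DRAM
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
      -d 975efcc6-4c97-47c9-894f-3a913b05576f -c nvc0n1p0 --l2p_dram_limit 20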
00:19:20.844 [2024-11-20 12:51:46.187483] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:25.052 [2024-11-20 12:51:50.000169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.052 [2024-11-20 12:51:50.000264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:25.052 [2024-11-20 12:51:50.000299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3812.663 ms 00:19:25.052 [2024-11-20 12:51:50.000312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.052 [2024-11-20 12:51:50.033342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.052 [2024-11-20 12:51:50.033405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:25.052 [2024-11-20 12:51:50.033425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.526 ms 00:19:25.052 [2024-11-20 12:51:50.033434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.052 [2024-11-20 12:51:50.033595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.052 [2024-11-20 12:51:50.033607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:25.052 [2024-11-20 12:51:50.033621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:25.052 [2024-11-20 12:51:50.033630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.052 [2024-11-20 12:51:50.081474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.052 [2024-11-20 12:51:50.081536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:25.052 [2024-11-20 12:51:50.081555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.804 ms 00:19:25.052 [2024-11-20 12:51:50.081564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.052 [2024-11-20 12:51:50.081622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.052 [2024-11-20 12:51:50.081636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:25.052 [2024-11-20 12:51:50.081647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:25.052 [2024-11-20 12:51:50.081656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.052 [2024-11-20 12:51:50.082319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.052 [2024-11-20 12:51:50.082358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:25.052 [2024-11-20 12:51:50.082372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:19:25.052 [2024-11-20 12:51:50.082380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.052 [2024-11-20 12:51:50.082523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.052 [2024-11-20 12:51:50.082533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:25.053 [2024-11-20 12:51:50.082547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:19:25.053 [2024-11-20 12:51:50.082555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.053 [2024-11-20 12:51:50.098570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.053 [2024-11-20 12:51:50.098614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:25.053 [2024-11-20 
12:51:50.098627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.994 ms 00:19:25.053 [2024-11-20 12:51:50.098635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.053 [2024-11-20 12:51:50.112230] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:25.053 [2024-11-20 12:51:50.119509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.053 [2024-11-20 12:51:50.119553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:25.053 [2024-11-20 12:51:50.119565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.782 ms 00:19:25.053 [2024-11-20 12:51:50.119578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.053 [2024-11-20 12:51:50.216077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.053 [2024-11-20 12:51:50.216138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:25.053 [2024-11-20 12:51:50.216154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.466 ms 00:19:25.053 [2024-11-20 12:51:50.216166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.053 [2024-11-20 12:51:50.216374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.053 [2024-11-20 12:51:50.216391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:25.053 [2024-11-20 12:51:50.216401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:19:25.053 [2024-11-20 12:51:50.216411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.053 [2024-11-20 12:51:50.242370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.053 [2024-11-20 12:51:50.242424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:25.053 [2024-11-20 12:51:50.242439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.903 ms 00:19:25.053 [2024-11-20 12:51:50.242449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.053 [2024-11-20 12:51:50.267425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.053 [2024-11-20 12:51:50.267474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:25.053 [2024-11-20 12:51:50.267487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.914 ms 00:19:25.053 [2024-11-20 12:51:50.267497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.053 [2024-11-20 12:51:50.268155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.053 [2024-11-20 12:51:50.268178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:25.053 [2024-11-20 12:51:50.268187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.613 ms 00:19:25.053 [2024-11-20 12:51:50.268197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.053 [2024-11-20 12:51:50.348712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.053 [2024-11-20 12:51:50.348782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:25.053 [2024-11-20 12:51:50.348797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.474 ms 00:19:25.053 [2024-11-20 12:51:50.348809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.053 [2024-11-20 
12:51:50.376387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.053 [2024-11-20 12:51:50.376452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:25.053 [2024-11-20 12:51:50.376465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.484 ms 00:19:25.053 [2024-11-20 12:51:50.376479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.053 [2024-11-20 12:51:50.402124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.053 [2024-11-20 12:51:50.402171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:25.053 [2024-11-20 12:51:50.402182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.596 ms 00:19:25.053 [2024-11-20 12:51:50.402193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.053 [2024-11-20 12:51:50.428588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.053 [2024-11-20 12:51:50.428644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:25.053 [2024-11-20 12:51:50.428657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.348 ms 00:19:25.053 [2024-11-20 12:51:50.428668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.053 [2024-11-20 12:51:50.428722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.053 [2024-11-20 12:51:50.428758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:25.053 [2024-11-20 12:51:50.428769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:25.053 [2024-11-20 12:51:50.428780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.053 [2024-11-20 12:51:50.428876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.053 [2024-11-20 12:51:50.428888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:25.053 [2024-11-20 12:51:50.428897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:25.053 [2024-11-20 12:51:50.428908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.053 [2024-11-20 12:51:50.430715] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4253.224 ms, result 0 00:19:25.053 { 00:19:25.053 "name": "ftl0", 00:19:25.053 "uuid": "7ad62697-b95a-4a36-b555-47ad197c16d6" 00:19:25.053 } 00:19:25.053 12:51:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:25.053 12:51:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:25.053 12:51:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:25.315 12:51:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:25.315 [2024-11-20 12:51:50.758230] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:25.315 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:25.315 Zero copy mechanism will not be used. 00:19:25.315 Running I/O for 4 seconds... 
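Note: the three timed passes that follow are all driven through bdevperf's perform_tests helper; the command lines, copied from this log, differ only in queue depth, workload, and I/O size:

  # qd=1, 68 KiB random writes (69632 B exceeds the 65536 B threshold, so zero copy is skipped, as noted above)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
  # qd=128, 4 KiB random writes
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
  # qd=128, 4 KiB verify pass (writes a pattern, then reads it back and compares)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096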
00:19:27.649 1049.00 IOPS, 69.66 MiB/s [2024-11-20T12:51:54.114Z] 1027.50 IOPS, 68.23 MiB/s [2024-11-20T12:51:55.058Z] 1054.00 IOPS, 69.99 MiB/s [2024-11-20T12:51:55.058Z] 1134.75 IOPS, 75.35 MiB/s 00:19:29.539 Latency(us) 00:19:29.539 [2024-11-20T12:51:55.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.539 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:29.539 ftl0 : 4.00 1134.35 75.33 0.00 0.00 927.41 275.69 3377.62 00:19:29.539 [2024-11-20T12:51:55.058Z] =================================================================================================================== 00:19:29.539 [2024-11-20T12:51:55.058Z] Total : 1134.35 75.33 0.00 0.00 927.41 275.69 3377.62 00:19:29.539 [2024-11-20 12:51:54.769930] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:29.539 { 00:19:29.539 "results": [ 00:19:29.539 { 00:19:29.539 "job": "ftl0", 00:19:29.539 "core_mask": "0x1", 00:19:29.539 "workload": "randwrite", 00:19:29.539 "status": "finished", 00:19:29.539 "queue_depth": 1, 00:19:29.539 "io_size": 69632, 00:19:29.539 "runtime": 4.002298, 00:19:29.539 "iops": 1134.3483168919456, 00:19:29.539 "mibps": 75.32781791860576, 00:19:29.539 "io_failed": 0, 00:19:29.539 "io_timeout": 0, 00:19:29.539 "avg_latency_us": 927.4140020332092, 00:19:29.539 "min_latency_us": 275.6923076923077, 00:19:29.539 "max_latency_us": 3377.6246153846155 00:19:29.539 } 00:19:29.539 ], 00:19:29.539 "core_count": 1 00:19:29.539 } 00:19:29.539 12:51:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:29.539 [2024-11-20 12:51:54.886720] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:29.539 Running I/O for 4 seconds... 
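Note: the MiB/s column in these tables is simply IOPS scaled by the I/O size; as a quick check on the qd=1 pass above:

  # MiB/s = iops * io_size / 2^20
  awk 'BEGIN { print 1134.35 * 69632 / 1048576 }'   # prints 75.3279, matching the reported 75.33 MiB/s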
00:19:31.426 5673.00 IOPS, 22.16 MiB/s [2024-11-20T12:51:58.334Z] 5111.00 IOPS, 19.96 MiB/s [2024-11-20T12:51:58.907Z] 4932.00 IOPS, 19.27 MiB/s [2024-11-20T12:51:59.172Z] 4822.25 IOPS, 18.84 MiB/s 00:19:33.653 Latency(us) 00:19:33.653 [2024-11-20T12:51:59.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.653 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:33.653 ftl0 : 4.03 4812.23 18.80 0.00 0.00 26493.91 302.47 52832.10 00:19:33.653 [2024-11-20T12:51:59.172Z] =================================================================================================================== 00:19:33.653 [2024-11-20T12:51:59.172Z] Total : 4812.23 18.80 0.00 0.00 26493.91 0.00 52832.10 00:19:33.653 { 00:19:33.653 "results": [ 00:19:33.653 { 00:19:33.653 "job": "ftl0", 00:19:33.653 "core_mask": "0x1", 00:19:33.653 "workload": "randwrite", 00:19:33.653 "status": "finished", 00:19:33.653 "queue_depth": 128, 00:19:33.653 "io_size": 4096, 00:19:33.653 "runtime": 4.033061, 00:19:33.653 "iops": 4812.225751110633, 00:19:33.653 "mibps": 18.79775684027591, 00:19:33.653 "io_failed": 0, 00:19:33.654 "io_timeout": 0, 00:19:33.654 "avg_latency_us": 26493.907672014713, 00:19:33.654 "min_latency_us": 302.4738461538462, 00:19:33.654 "max_latency_us": 52832.09846153846 00:19:33.654 } 00:19:33.654 ], 00:19:33.654 "core_count": 1 00:19:33.654 } 00:19:33.654 [2024-11-20 12:51:58.930186] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:33.654 12:51:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:33.654 [2024-11-20 12:51:59.050034] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:33.654 Running I/O for 4 seconds... 
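Note: each pass also emits its summary as a JSON "results" blob, as printed above. Assuming that blob were captured to a file (results.json here is hypothetical; the field names are exactly those printed in this log), the headline figures could be pulled out with jq:

  jq '.results[] | {job, iops, mibps, avg_latency_us, max_latency_us}' results.json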
00:19:35.625 4331.00 IOPS, 16.92 MiB/s [2024-11-20T12:52:02.087Z] 4324.00 IOPS, 16.89 MiB/s [2024-11-20T12:52:03.474Z] 4296.33 IOPS, 16.78 MiB/s [2024-11-20T12:52:03.474Z] 4278.00 IOPS, 16.71 MiB/s 00:19:37.955 Latency(us) 00:19:37.955 [2024-11-20T12:52:03.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.955 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:37.955 Verification LBA range: start 0x0 length 0x1400000 00:19:37.955 ftl0 : 4.01 4294.20 16.77 0.00 0.00 29732.27 341.86 40934.79 00:19:37.955 [2024-11-20T12:52:03.474Z] =================================================================================================================== 00:19:37.955 [2024-11-20T12:52:03.474Z] Total : 4294.20 16.77 0.00 0.00 29732.27 0.00 40934.79 00:19:37.955 [2024-11-20 12:52:03.078794] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:37.955 { 00:19:37.955 "results": [ 00:19:37.955 { 00:19:37.955 "job": "ftl0", 00:19:37.955 "core_mask": "0x1", 00:19:37.955 "workload": "verify", 00:19:37.955 "status": "finished", 00:19:37.955 "verify_range": { 00:19:37.955 "start": 0, 00:19:37.955 "length": 20971520 00:19:37.955 }, 00:19:37.955 "queue_depth": 128, 00:19:37.955 "io_size": 4096, 00:19:37.955 "runtime": 4.012621, 00:19:37.955 "iops": 4294.200723168224, 00:19:37.955 "mibps": 16.774221574875874, 00:19:37.955 "io_failed": 0, 00:19:37.955 "io_timeout": 0, 00:19:37.955 "avg_latency_us": 29732.267902840587, 00:19:37.955 "min_latency_us": 341.85846153846154, 00:19:37.955 "max_latency_us": 40934.79384615384 00:19:37.955 } 00:19:37.955 ], 00:19:37.955 "core_count": 1 00:19:37.955 } 00:19:37.955 12:52:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:37.955 [2024-11-20 12:52:03.293672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.955 [2024-11-20 12:52:03.293752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:37.955 [2024-11-20 12:52:03.293770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:37.956 [2024-11-20 12:52:03.293781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.956 [2024-11-20 12:52:03.293804] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:37.956 [2024-11-20 12:52:03.296927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.956 [2024-11-20 12:52:03.296976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:37.956 [2024-11-20 12:52:03.296991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.102 ms 00:19:37.956 [2024-11-20 12:52:03.297000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.956 [2024-11-20 12:52:03.300026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.956 [2024-11-20 12:52:03.300075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:37.956 [2024-11-20 12:52:03.300089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.991 ms 00:19:37.956 [2024-11-20 12:52:03.300097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.217 [2024-11-20 12:52:03.604092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.217 [2024-11-20 12:52:03.604159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:19:38.217 [2024-11-20 12:52:03.604183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 303.956 ms 00:19:38.217 [2024-11-20 12:52:03.604194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.217 [2024-11-20 12:52:03.610401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.217 [2024-11-20 12:52:03.610446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:38.217 [2024-11-20 12:52:03.610460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.155 ms 00:19:38.217 [2024-11-20 12:52:03.610468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.217 [2024-11-20 12:52:03.636779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.217 [2024-11-20 12:52:03.636832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:38.217 [2024-11-20 12:52:03.636848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.229 ms 00:19:38.217 [2024-11-20 12:52:03.636855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.217 [2024-11-20 12:52:03.653890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.217 [2024-11-20 12:52:03.653944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:38.217 [2024-11-20 12:52:03.653963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.983 ms 00:19:38.217 [2024-11-20 12:52:03.653971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.217 [2024-11-20 12:52:03.654133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.217 [2024-11-20 12:52:03.654146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:38.217 [2024-11-20 12:52:03.654160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:19:38.217 [2024-11-20 12:52:03.654168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.217 [2024-11-20 12:52:03.679816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.217 [2024-11-20 12:52:03.679864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:38.218 [2024-11-20 12:52:03.679879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.628 ms 00:19:38.218 [2024-11-20 12:52:03.679887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.218 [2024-11-20 12:52:03.704944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.218 [2024-11-20 12:52:03.704994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:38.218 [2024-11-20 12:52:03.705008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.009 ms 00:19:38.218 [2024-11-20 12:52:03.705015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.218 [2024-11-20 12:52:03.730220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.218 [2024-11-20 12:52:03.730268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:38.218 [2024-11-20 12:52:03.730283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.155 ms 00:19:38.218 [2024-11-20 12:52:03.730290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.481 [2024-11-20 12:52:03.754844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.481 [2024-11-20 
12:52:03.754890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:38.481 [2024-11-20 12:52:03.754908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.435 ms 00:19:38.481 [2024-11-20 12:52:03.754915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.481 [2024-11-20 12:52:03.754961] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:38.481 [2024-11-20 12:52:03.754977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:38.481 [2024-11-20 12:52:03.754990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:38.481 [2024-11-20 12:52:03.754999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:38.481 [2024-11-20 12:52:03.755009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:38.481 [2024-11-20 12:52:03.755016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:38.481 [2024-11-20 12:52:03.755026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:38.481 [2024-11-20 12:52:03.755033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:38.481 [2024-11-20 12:52:03.755061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:38.481 [2024-11-20 12:52:03.755068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755656] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:38.482 [2024-11-20 12:52:03.755911] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:38.483 [2024-11-20 12:52:03.755923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:38.483 [2024-11-20 12:52:03.755931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:38.483 [2024-11-20 12:52:03.755941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:38.483 [2024-11-20 12:52:03.755958] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:38.483 [2024-11-20 12:52:03.755968] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7ad62697-b95a-4a36-b555-47ad197c16d6 00:19:38.483 [2024-11-20 12:52:03.755978] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:38.483 [2024-11-20 12:52:03.755987] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:38.483 [2024-11-20 12:52:03.755996] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:38.483 [2024-11-20 12:52:03.756006] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:38.483 [2024-11-20 12:52:03.756016] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:38.483 [2024-11-20 12:52:03.756026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:38.483 [2024-11-20 12:52:03.756034] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:38.483 [2024-11-20 12:52:03.756044] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:38.483 [2024-11-20 12:52:03.756051] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:38.483 [2024-11-20 12:52:03.756060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.483 [2024-11-20 12:52:03.756067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:38.483 [2024-11-20 12:52:03.756079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.101 ms 00:19:38.483 [2024-11-20 12:52:03.756086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.483 [2024-11-20 12:52:03.769825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.483 [2024-11-20 12:52:03.769872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:38.483 [2024-11-20 12:52:03.769886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.699 ms 00:19:38.483 [2024-11-20 12:52:03.769894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.483 [2024-11-20 12:52:03.770299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.483 [2024-11-20 12:52:03.770321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:38.483 [2024-11-20 12:52:03.770333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:19:38.483 [2024-11-20 12:52:03.770341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.483 [2024-11-20 12:52:03.808925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.483 [2024-11-20 12:52:03.808975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:38.483 [2024-11-20 12:52:03.808991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.483 [2024-11-20 12:52:03.808999] mngt/ftl_mngt.c: 431:trace_step: 
00:19:38.483 [2024-11-20 12:52:03.769825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:38.483 [2024-11-20 12:52:03.769872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:19:38.483 [2024-11-20 12:52:03.769886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.699 ms
00:19:38.483 [2024-11-20 12:52:03.769894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:38.483 [2024-11-20 12:52:03.770299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:38.483 [2024-11-20 12:52:03.770321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:19:38.483 [2024-11-20 12:52:03.770333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms
00:19:38.483 [2024-11-20 12:52:03.770341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:38.483 [2024-11-20 12:52:03.808925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:38.483 [2024-11-20 12:52:03.808975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:19:38.483 [2024-11-20 12:52:03.808991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:38.483 [2024-11-20 12:52:03.808999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:38.483 [2024-11-20 12:52:03.809070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:38.483 [2024-11-20 12:52:03.809079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:19:38.483 [2024-11-20 12:52:03.809090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:38.483 [2024-11-20 12:52:03.809098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:38.483 [2024-11-20 12:52:03.809174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:38.483 [2024-11-20 12:52:03.809189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:19:38.483 [2024-11-20 12:52:03.809199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:38.483 [2024-11-20 12:52:03.809207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:38.483 [2024-11-20 12:52:03.809225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:38.483 [2024-11-20 12:52:03.809233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:19:38.483 [2024-11-20 12:52:03.809244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:38.483 [2024-11-20 12:52:03.809251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:38.483 [2024-11-20 12:52:03.892442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:38.483 [2024-11-20 12:52:03.892501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:19:38.483 [2024-11-20 12:52:03.892520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:38.483 [2024-11-20 12:52:03.892528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:38.483 [2024-11-20 12:52:03.960321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:38.483 [2024-11-20 12:52:03.960383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:19:38.483 [2024-11-20 12:52:03.960399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:38.483 [2024-11-20 12:52:03.960408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:38.483 [2024-11-20 12:52:03.960545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:38.483 [2024-11-20 12:52:03.960558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:19:38.483 [2024-11-20 12:52:03.960572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:38.483 [2024-11-20 12:52:03.960580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:38.483 [2024-11-20 12:52:03.960627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:38.483 [2024-11-20 12:52:03.960637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:19:38.483 [2024-11-20 12:52:03.960647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:38.483 [2024-11-20 12:52:03.960655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:38.483 [2024-11-20 12:52:03.960783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:38.483 [2024-11-20 12:52:03.960795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:19:38.483 [2024-11-20 12:52:03.960811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:38.483 [2024-11-20 12:52:03.960819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:38.483 [2024-11-20 12:52:03.960855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:38.483 [2024-11-20 12:52:03.960864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:19:38.483 [2024-11-20 12:52:03.960875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:38.483 [2024-11-20 12:52:03.960883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:38.483 [2024-11-20 12:52:03.960927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:38.483 [2024-11-20 12:52:03.960937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:38.483 [2024-11-20 12:52:03.960948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:38.483 [2024-11-20 12:52:03.960958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:38.483 [2024-11-20 12:52:03.961011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:38.483 [2024-11-20 12:52:03.961044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:38.483 [2024-11-20 12:52:03.961056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:38.483 [2024-11-20 12:52:03.961064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:38.483 [2024-11-20 12:52:03.961213] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 667.490 ms, result 0
00:19:38.483 true
00:19:38.483 12:52:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 76044
00:19:38.483 12:52:03 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 76044 ']'
00:19:38.483 12:52:03 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 76044
00:19:38.483 12:52:03 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname
00:19:38.745 12:52:03 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:38.745 12:52:03 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76044
00:19:38.745 12:52:04 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:38.745 killing process with pid 76044
00:19:38.745 12:52:04 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:38.745 12:52:04 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76044'
00:19:38.745 Received shutdown signal, test time was about 4.000000 seconds
00:19:38.745 00
00:19:38.745                                                            Latency(us)
00:19:38.745 [2024-11-20T12:52:04.264Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:38.745 [2024-11-20T12:52:04.264Z] ===================================================================================================================
00:19:38.745 [2024-11-20T12:52:04.264Z] Total                       :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:19:38.745 12:52:04 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 76044
00:19:38.745 12:52:04 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 76044
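The shell trace above is the usual autotest_common.sh killprocess sequence: confirm the pid is alive with kill -0, look up its command name with ps, announce the kill, then kill and reap the process with wait. A condensed sketch of that flow (simplified; the real helper has extra branches, for example the sudo check visible in the trace):

    # Condensed sketch of the killprocess flow traced above.
    killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                      # still alive?
      local process_name
      if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                     # reap and propagate exit status
    }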
00:19:44.034 Remove shared memory files
00:19:44.034 12:52:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:19:44.034 12:52:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm
00:19:44.034 12:52:08 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
00:19:44.034 12:52:08 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f
00:19:44.034 12:52:08 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f
00:19:44.034 12:52:08 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f
00:19:44.034 12:52:08 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:19:44.034 12:52:08 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f
00:19:44.034 ************************************
00:19:44.034 END TEST ftl_bdevperf
00:19:44.034 ************************************
00:19:44.034
00:19:44.034 real 0m26.652s
00:19:44.034 user 0m28.974s
00:19:44.034 sys 0m1.084s
00:19:44.034 12:52:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:44.034 12:52:08 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:19:44.034 12:52:08 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:19:44.034 12:52:08 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:19:44.034 12:52:08 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:44.034 12:52:08 ftl -- common/autotest_common.sh@10 -- # set +x
00:19:44.034 ************************************
00:19:44.034 START TEST ftl_trim
00:19:44.034 ************************************
00:19:44.034 12:52:09 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:19:44.034 * Looking for test storage...
00:19:44.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:19:44.034 12:52:09 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:19:44.034 12:52:09 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:19:44.034 12:52:09 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version
00:19:44.034 12:52:09 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:19:44.034 12:52:09 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:44.034 12:52:09 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:44.034 12:52:09 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:44.034 12:52:09 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-:
00:19:44.034 12:52:09 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1
00:19:44.034 12:52:09 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-:
00:19:44.034 12:52:09 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2
00:19:44.034 12:52:09 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<'
00:19:44.034 12:52:09 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2
00:19:44.034 12:52:09 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1
00:19:44.034 12:52:09 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:44.034 12:52:09 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in
00:19:44.034 12:52:09 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1
00:19:44.034 12:52:09 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:44.034 12:52:09 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:19:44.034 12:52:09 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:44.034 12:52:09 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:44.034 12:52:09 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:44.034 12:52:09 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:44.034 12:52:09 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:44.035 12:52:09 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:44.035 12:52:09 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:44.035 12:52:09 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:44.035 12:52:09 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:44.035 12:52:09 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:44.035 12:52:09 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:44.035 12:52:09 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:44.035 12:52:09 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:44.035 12:52:09 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:44.035 12:52:09 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:44.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.035 --rc genhtml_branch_coverage=1 00:19:44.035 --rc genhtml_function_coverage=1 00:19:44.035 --rc genhtml_legend=1 00:19:44.035 --rc geninfo_all_blocks=1 00:19:44.035 --rc geninfo_unexecuted_blocks=1 00:19:44.035 00:19:44.035 ' 00:19:44.035 12:52:09 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:44.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.035 --rc genhtml_branch_coverage=1 00:19:44.035 --rc genhtml_function_coverage=1 00:19:44.035 --rc genhtml_legend=1 00:19:44.035 --rc geninfo_all_blocks=1 00:19:44.035 --rc geninfo_unexecuted_blocks=1 00:19:44.035 00:19:44.035 ' 00:19:44.035 12:52:09 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:44.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.035 --rc genhtml_branch_coverage=1 00:19:44.035 --rc genhtml_function_coverage=1 00:19:44.035 --rc genhtml_legend=1 00:19:44.035 --rc geninfo_all_blocks=1 00:19:44.035 --rc geninfo_unexecuted_blocks=1 00:19:44.035 00:19:44.035 ' 00:19:44.035 12:52:09 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:44.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.035 --rc genhtml_branch_coverage=1 00:19:44.035 --rc genhtml_function_coverage=1 00:19:44.035 --rc genhtml_legend=1 00:19:44.035 --rc geninfo_all_blocks=1 00:19:44.035 --rc geninfo_unexecuted_blocks=1 00:19:44.035 00:19:44.035 ' 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
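The cmp_versions trace above splits each version string on '.', '-' and ':' (the IFS=.-: reads) and compares the components numerically, which is how lcov 1.15 correctly sorts below 2. A self-contained sketch of the same component-wise comparison (ver_lt is an illustrative name, not the exact scripts/common.sh code):

    # Illustrative re-implementation of the component-wise version compare traced above.
    ver_lt() {                     # usage: ver_lt 1.15 2  -> exit 0 if $1 < $2
      local IFS=.-: v1 v2 i
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        ((10#${v1[i]:-0} < 10#${v2[i]:-0})) && return 0   # missing components count as 0
        ((10#${v1[i]:-0} > 10#${v2[i]:-0})) && return 1
      done
      return 1                     # equal
    }
    ver_lt 1.15 2 && echo "lcov is older than 2"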
00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:44.035 12:52:09 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT
00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76397
00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76397
00:19:44.035 12:52:09 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76397 ']'
00:19:44.035 12:52:09 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:44.035 12:52:09 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:44.035 12:52:09 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:19:44.035 12:52:09 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:44.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:44.035 12:52:09 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:44.035 12:52:09 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:19:44.035 [2024-11-20 12:52:09.273721] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization...
00:19:44.035 [2024-11-20 12:52:09.274103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76397 ]
00:19:44.297 [2024-11-20 12:52:09.436779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:19:44.297 [2024-11-20 12:52:09.563627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:44.297 [2024-11-20 12:52:09.563928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:44.297 [2024-11-20 12:52:09.564019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:44.869 12:52:10 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:44.869 12:52:10 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
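waitforlisten above amounts to retrying an RPC against /var/tmp/spdk.sock (up to the max_retries=100 seen in the trace) until the freshly launched spdk_tgt answers. A minimal stand-alone sketch of that pattern, using the generic rpc_get_methods call as the liveness probe:

    # Minimal sketch: start spdk_tgt and poll its RPC socket until it answers.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 &
    svcpid=$!
    for ((i = 0; i < 100; i++)); do
      # rpc_get_methods succeeds only once the app listens on /var/tmp/spdk.sock
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
        break
      fi
      sleep 0.5
    done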
00:19:45.392 "4141e5e0-15bf-429e-8f0c-1d6cf14c407e" 00:19:45.392 ], 00:19:45.392 "product_name": "NVMe disk", 00:19:45.392 "block_size": 4096, 00:19:45.392 "num_blocks": 1310720, 00:19:45.392 "uuid": "4141e5e0-15bf-429e-8f0c-1d6cf14c407e", 00:19:45.392 "numa_id": -1, 00:19:45.392 "assigned_rate_limits": { 00:19:45.392 "rw_ios_per_sec": 0, 00:19:45.392 "rw_mbytes_per_sec": 0, 00:19:45.392 "r_mbytes_per_sec": 0, 00:19:45.392 "w_mbytes_per_sec": 0 00:19:45.392 }, 00:19:45.392 "claimed": true, 00:19:45.392 "claim_type": "read_many_write_one", 00:19:45.392 "zoned": false, 00:19:45.392 "supported_io_types": { 00:19:45.392 "read": true, 00:19:45.392 "write": true, 00:19:45.392 "unmap": true, 00:19:45.392 "flush": true, 00:19:45.392 "reset": true, 00:19:45.392 "nvme_admin": true, 00:19:45.392 "nvme_io": true, 00:19:45.392 "nvme_io_md": false, 00:19:45.392 "write_zeroes": true, 00:19:45.392 "zcopy": false, 00:19:45.392 "get_zone_info": false, 00:19:45.392 "zone_management": false, 00:19:45.392 "zone_append": false, 00:19:45.392 "compare": true, 00:19:45.392 "compare_and_write": false, 00:19:45.392 "abort": true, 00:19:45.392 "seek_hole": false, 00:19:45.392 "seek_data": false, 00:19:45.392 "copy": true, 00:19:45.392 "nvme_iov_md": false 00:19:45.392 }, 00:19:45.392 "driver_specific": { 00:19:45.392 "nvme": [ 00:19:45.392 { 00:19:45.392 "pci_address": "0000:00:11.0", 00:19:45.392 "trid": { 00:19:45.392 "trtype": "PCIe", 00:19:45.392 "traddr": "0000:00:11.0" 00:19:45.392 }, 00:19:45.392 "ctrlr_data": { 00:19:45.392 "cntlid": 0, 00:19:45.392 "vendor_id": "0x1b36", 00:19:45.392 "model_number": "QEMU NVMe Ctrl", 00:19:45.392 "serial_number": "12341", 00:19:45.392 "firmware_revision": "8.0.0", 00:19:45.392 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:45.392 "oacs": { 00:19:45.392 "security": 0, 00:19:45.392 "format": 1, 00:19:45.392 "firmware": 0, 00:19:45.392 "ns_manage": 1 00:19:45.392 }, 00:19:45.392 "multi_ctrlr": false, 00:19:45.392 "ana_reporting": false 00:19:45.392 }, 00:19:45.392 "vs": { 00:19:45.392 "nvme_version": "1.4" 00:19:45.392 }, 00:19:45.392 "ns_data": { 00:19:45.392 "id": 1, 00:19:45.392 "can_share": false 00:19:45.392 } 00:19:45.392 } 00:19:45.392 ], 00:19:45.392 "mp_policy": "active_passive" 00:19:45.392 } 00:19:45.392 } 00:19:45.392 ]' 00:19:45.392 12:52:10 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:45.392 12:52:10 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:45.392 12:52:10 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:45.392 12:52:10 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:45.392 12:52:10 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:45.392 12:52:10 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:45.392 12:52:10 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:45.392 12:52:10 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:45.392 12:52:10 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:45.392 12:52:10 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:45.392 12:52:10 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:45.652 12:52:11 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=57cfb3f1-2f3b-497b-aff4-d83ccacc3532 00:19:45.652 12:52:11 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:45.652 12:52:11 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
00:19:45.392 12:52:10 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols
00:19:45.392 12:52:10 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:19:45.392 12:52:10 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:19:45.652 12:52:11 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=57cfb3f1-2f3b-497b-aff4-d83ccacc3532
00:19:45.652 12:52:11 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores
00:19:45.652 12:52:11 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 57cfb3f1-2f3b-497b-aff4-d83ccacc3532
00:19:45.914 12:52:11 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:19:46.175 12:52:11 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=7d01b25a-0873-4026-91a8-9e3080038c05
00:19:46.175 12:52:11 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7d01b25a-0873-4026-91a8-9e3080038c05
00:19:46.436 12:52:11 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=65df5fd2-4de9-4289-ae82-6eaf997111ec
00:19:46.436 12:52:11 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 65df5fd2-4de9-4289-ae82-6eaf997111ec
00:19:46.436 12:52:11 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0
00:19:46.436 12:52:11 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:19:46.436 12:52:11 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=65df5fd2-4de9-4289-ae82-6eaf997111ec
00:19:46.436 12:52:11 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size=
00:19:46.436 12:52:11 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 65df5fd2-4de9-4289-ae82-6eaf997111ec
00:19:46.436 12:52:11 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=65df5fd2-4de9-4289-ae82-6eaf997111ec
00:19:46.436 12:52:11 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info
00:19:46.436 12:52:11 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs
00:19:46.436 12:52:11 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb
00:19:46.436 12:52:11 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 65df5fd2-4de9-4289-ae82-6eaf997111ec
00:19:46.698 12:52:11 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[
00:19:46.698 {
00:19:46.698 "name": "65df5fd2-4de9-4289-ae82-6eaf997111ec",
00:19:46.698 "aliases": [
00:19:46.698 "lvs/nvme0n1p0"
00:19:46.698 ],
00:19:46.698 "product_name": "Logical Volume",
00:19:46.698 "block_size": 4096,
00:19:46.698 "num_blocks": 26476544,
00:19:46.698 "uuid": "65df5fd2-4de9-4289-ae82-6eaf997111ec",
00:19:46.698 "assigned_rate_limits": {
00:19:46.698 "rw_ios_per_sec": 0,
00:19:46.698 "rw_mbytes_per_sec": 0,
00:19:46.698 "r_mbytes_per_sec": 0,
00:19:46.698 "w_mbytes_per_sec": 0
00:19:46.698 },
00:19:46.698 "claimed": false,
00:19:46.698 "zoned": false,
00:19:46.698 "supported_io_types": {
00:19:46.698 "read": true,
00:19:46.698 "write": true,
00:19:46.698 "unmap": true,
00:19:46.698 "flush": false,
00:19:46.698 "reset": true,
00:19:46.698 "nvme_admin": false,
00:19:46.698 "nvme_io": false,
00:19:46.698 "nvme_io_md": false,
00:19:46.698 "write_zeroes": true,
00:19:46.698 "zcopy": false,
00:19:46.698 "get_zone_info": false,
00:19:46.698 "zone_management": false,
00:19:46.698 "zone_append": false,
00:19:46.698 "compare": false,
00:19:46.699 "compare_and_write": false,
00:19:46.699 "abort": false,
00:19:46.699 "seek_hole": true,
00:19:46.699 "seek_data": true,
00:19:46.699 "copy": false,
00:19:46.699 "nvme_iov_md": false
00:19:46.699 },
00:19:46.699 "driver_specific": {
00:19:46.699 "lvol": {
00:19:46.699 "lvol_store_uuid": "7d01b25a-0873-4026-91a8-9e3080038c05",
00:19:46.699 "base_bdev": "nvme0n1",
00:19:46.699 "thin_provision": true,
00:19:46.699 "num_allocated_clusters": 0,
00:19:46.699 "snapshot": false,
00:19:46.699 "clone": false,
00:19:46.699 "esnap_clone": false
00:19:46.699 }
00:19:46.699 }
00:19:46.699 }
00:19:46.699 ]'
00:19:46.699 12:52:11 ftl.ftl_trim --
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:46.699 12:52:12 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:46.699 12:52:12 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:46.699 12:52:12 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:46.699 12:52:12 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:46.699 12:52:12 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:46.699 12:52:12 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:46.699 12:52:12 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:46.699 12:52:12 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:46.959 12:52:12 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:46.959 12:52:12 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:46.959 12:52:12 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 65df5fd2-4de9-4289-ae82-6eaf997111ec 00:19:46.959 12:52:12 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=65df5fd2-4de9-4289-ae82-6eaf997111ec 00:19:46.959 12:52:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:46.959 12:52:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:46.959 12:52:12 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:46.959 12:52:12 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 65df5fd2-4de9-4289-ae82-6eaf997111ec 00:19:47.222 12:52:12 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:47.222 { 00:19:47.222 "name": "65df5fd2-4de9-4289-ae82-6eaf997111ec", 00:19:47.222 "aliases": [ 00:19:47.222 "lvs/nvme0n1p0" 00:19:47.222 ], 00:19:47.222 "product_name": "Logical Volume", 00:19:47.222 "block_size": 4096, 00:19:47.222 "num_blocks": 26476544, 00:19:47.222 "uuid": "65df5fd2-4de9-4289-ae82-6eaf997111ec", 00:19:47.222 "assigned_rate_limits": { 00:19:47.222 "rw_ios_per_sec": 0, 00:19:47.222 "rw_mbytes_per_sec": 0, 00:19:47.222 "r_mbytes_per_sec": 0, 00:19:47.222 "w_mbytes_per_sec": 0 00:19:47.222 }, 00:19:47.222 "claimed": false, 00:19:47.222 "zoned": false, 00:19:47.222 "supported_io_types": { 00:19:47.222 "read": true, 00:19:47.222 "write": true, 00:19:47.222 "unmap": true, 00:19:47.222 "flush": false, 00:19:47.222 "reset": true, 00:19:47.222 "nvme_admin": false, 00:19:47.222 "nvme_io": false, 00:19:47.222 "nvme_io_md": false, 00:19:47.222 "write_zeroes": true, 00:19:47.222 "zcopy": false, 00:19:47.222 "get_zone_info": false, 00:19:47.222 "zone_management": false, 00:19:47.222 "zone_append": false, 00:19:47.222 "compare": false, 00:19:47.222 "compare_and_write": false, 00:19:47.222 "abort": false, 00:19:47.222 "seek_hole": true, 00:19:47.222 "seek_data": true, 00:19:47.222 "copy": false, 00:19:47.222 "nvme_iov_md": false 00:19:47.222 }, 00:19:47.222 "driver_specific": { 00:19:47.222 "lvol": { 00:19:47.222 "lvol_store_uuid": "7d01b25a-0873-4026-91a8-9e3080038c05", 00:19:47.222 "base_bdev": "nvme0n1", 00:19:47.222 "thin_provision": true, 00:19:47.222 "num_allocated_clusters": 0, 00:19:47.222 "snapshot": false, 00:19:47.222 "clone": false, 00:19:47.222 "esnap_clone": false 00:19:47.222 } 00:19:47.222 } 00:19:47.222 } 00:19:47.222 ]' 00:19:47.222 12:52:12 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:47.222 12:52:12 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:47.222 12:52:12 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:47.222 12:52:12 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:47.222 12:52:12 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:47.222 12:52:12 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:47.222 12:52:12 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:47.222 12:52:12 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:47.481 12:52:12 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:47.481 12:52:12 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:47.481 12:52:12 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 65df5fd2-4de9-4289-ae82-6eaf997111ec 00:19:47.481 12:52:12 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=65df5fd2-4de9-4289-ae82-6eaf997111ec 00:19:47.481 12:52:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:47.481 12:52:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:47.481 12:52:12 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:47.481 12:52:12 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 65df5fd2-4de9-4289-ae82-6eaf997111ec 00:19:47.481 12:52:12 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:47.481 { 00:19:47.481 "name": "65df5fd2-4de9-4289-ae82-6eaf997111ec", 00:19:47.481 "aliases": [ 00:19:47.481 "lvs/nvme0n1p0" 00:19:47.481 ], 00:19:47.481 "product_name": "Logical Volume", 00:19:47.481 "block_size": 4096, 00:19:47.481 "num_blocks": 26476544, 00:19:47.481 "uuid": "65df5fd2-4de9-4289-ae82-6eaf997111ec", 00:19:47.481 "assigned_rate_limits": { 00:19:47.481 "rw_ios_per_sec": 0, 00:19:47.481 "rw_mbytes_per_sec": 0, 00:19:47.481 "r_mbytes_per_sec": 0, 00:19:47.481 "w_mbytes_per_sec": 0 00:19:47.481 }, 00:19:47.481 "claimed": false, 00:19:47.481 "zoned": false, 00:19:47.481 "supported_io_types": { 00:19:47.481 "read": true, 00:19:47.481 "write": true, 00:19:47.481 "unmap": true, 00:19:47.481 "flush": false, 00:19:47.481 "reset": true, 00:19:47.481 "nvme_admin": false, 00:19:47.481 "nvme_io": false, 00:19:47.481 "nvme_io_md": false, 00:19:47.481 "write_zeroes": true, 00:19:47.481 "zcopy": false, 00:19:47.481 "get_zone_info": false, 00:19:47.481 "zone_management": false, 00:19:47.481 "zone_append": false, 00:19:47.481 "compare": false, 00:19:47.481 "compare_and_write": false, 00:19:47.481 "abort": false, 00:19:47.481 "seek_hole": true, 00:19:47.481 "seek_data": true, 00:19:47.481 "copy": false, 00:19:47.481 "nvme_iov_md": false 00:19:47.481 }, 00:19:47.481 "driver_specific": { 00:19:47.481 "lvol": { 00:19:47.481 "lvol_store_uuid": "7d01b25a-0873-4026-91a8-9e3080038c05", 00:19:47.481 "base_bdev": "nvme0n1", 00:19:47.481 "thin_provision": true, 00:19:47.481 "num_allocated_clusters": 0, 00:19:47.481 "snapshot": false, 00:19:47.481 "clone": false, 00:19:47.481 "esnap_clone": false 00:19:47.481 } 00:19:47.481 } 00:19:47.481 } 00:19:47.481 ]' 00:19:47.481 12:52:12 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:47.744 12:52:12 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:47.744 12:52:12 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:47.744 12:52:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:47.744 12:52:13 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:47.744 12:52:13 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:47.744 12:52:13 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:47.744 12:52:13 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 65df5fd2-4de9-4289-ae82-6eaf997111ec -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:47.744 [2024-11-20 12:52:13.218496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.744 [2024-11-20 12:52:13.218530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:47.744 [2024-11-20 12:52:13.218544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:47.744 [2024-11-20 12:52:13.218551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.744 [2024-11-20 12:52:13.220768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.744 [2024-11-20 12:52:13.220793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:47.744 [2024-11-20 12:52:13.220802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.193 ms 00:19:47.744 [2024-11-20 12:52:13.220808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.744 [2024-11-20 12:52:13.220884] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:47.744 [2024-11-20 12:52:13.221399] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:47.744 [2024-11-20 12:52:13.221416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.744 [2024-11-20 12:52:13.221423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:47.744 [2024-11-20 12:52:13.221430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:19:47.744 [2024-11-20 12:52:13.221437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.744 [2024-11-20 12:52:13.221519] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 200ffff6-6870-4bdc-85eb-29aedea7a1b0 00:19:47.744 [2024-11-20 12:52:13.222434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.744 [2024-11-20 12:52:13.222455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:47.744 [2024-11-20 12:52:13.222463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:47.744 [2024-11-20 12:52:13.222470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.744 [2024-11-20 12:52:13.227251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.744 [2024-11-20 12:52:13.227342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:47.744 [2024-11-20 12:52:13.227386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.731 ms 00:19:47.744 [2024-11-20 12:52:13.227407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.744 [2024-11-20 12:52:13.227527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.744 [2024-11-20 12:52:13.227711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:47.744 [2024-11-20 12:52:13.227734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.069 ms 00:19:47.744 [2024-11-20 12:52:13.227766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.744 [2024-11-20 12:52:13.227805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.744 [2024-11-20 12:52:13.227826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:47.744 [2024-11-20 12:52:13.227877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:47.744 [2024-11-20 12:52:13.227896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.744 [2024-11-20 12:52:13.227931] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:47.744 [2024-11-20 12:52:13.230769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.744 [2024-11-20 12:52:13.230913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:47.744 [2024-11-20 12:52:13.231007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.841 ms 00:19:47.744 [2024-11-20 12:52:13.231081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.744 [2024-11-20 12:52:13.231164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.744 [2024-11-20 12:52:13.231250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:47.744 [2024-11-20 12:52:13.231302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:47.744 [2024-11-20 12:52:13.231384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.744 [2024-11-20 12:52:13.231453] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:47.744 [2024-11-20 12:52:13.231654] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:47.744 [2024-11-20 12:52:13.231767] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:47.744 [2024-11-20 12:52:13.231824] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:47.744 [2024-11-20 12:52:13.232011] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:47.744 [2024-11-20 12:52:13.232069] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:47.744 [2024-11-20 12:52:13.232149] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:47.744 [2024-11-20 12:52:13.232213] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:47.744 [2024-11-20 12:52:13.232260] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:47.744 [2024-11-20 12:52:13.232382] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:47.744 [2024-11-20 12:52:13.232431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.744 [2024-11-20 12:52:13.232557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:47.744 [2024-11-20 12:52:13.232605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:19:47.744 [2024-11-20 12:52:13.232732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.744 [2024-11-20 12:52:13.232875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.744 
[2024-11-20 12:52:13.232936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:47.744 [2024-11-20 12:52:13.233014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:47.744 [2024-11-20 12:52:13.233058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.744 [2024-11-20 12:52:13.233218] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:47.744 [2024-11-20 12:52:13.233265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:47.744 [2024-11-20 12:52:13.233314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:47.744 [2024-11-20 12:52:13.233390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:47.744 [2024-11-20 12:52:13.233437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:47.744 [2024-11-20 12:52:13.233544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:47.744 [2024-11-20 12:52:13.233565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:47.744 [2024-11-20 12:52:13.233635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:47.744 [2024-11-20 12:52:13.233684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:47.744 [2024-11-20 12:52:13.233724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:47.744 [2024-11-20 12:52:13.233825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:47.744 [2024-11-20 12:52:13.233893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:47.744 [2024-11-20 12:52:13.233974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:47.744 [2024-11-20 12:52:13.233993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:47.744 [2024-11-20 12:52:13.234062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:47.744 [2024-11-20 12:52:13.234107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:47.744 [2024-11-20 12:52:13.234173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:47.744 [2024-11-20 12:52:13.234212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:47.744 [2024-11-20 12:52:13.234258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:47.744 [2024-11-20 12:52:13.234327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:47.744 [2024-11-20 12:52:13.234392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:47.744 [2024-11-20 12:52:13.234410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:47.744 [2024-11-20 12:52:13.234453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:47.744 [2024-11-20 12:52:13.234489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:47.745 [2024-11-20 12:52:13.234531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:47.745 [2024-11-20 12:52:13.234571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:47.745 [2024-11-20 12:52:13.234666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:47.745 [2024-11-20 12:52:13.234710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:47.745 [2024-11-20 12:52:13.234748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:47.745 [2024-11-20 12:52:13.234817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:47.745 [2024-11-20 12:52:13.234829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:47.745 [2024-11-20 12:52:13.234834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:47.745 [2024-11-20 12:52:13.234843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:47.745 [2024-11-20 12:52:13.234848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:47.745 [2024-11-20 12:52:13.234855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:47.745 [2024-11-20 12:52:13.234860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:47.745 [2024-11-20 12:52:13.234866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:47.745 [2024-11-20 12:52:13.234871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:47.745 [2024-11-20 12:52:13.234878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:47.745 [2024-11-20 12:52:13.234883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:47.745 [2024-11-20 12:52:13.234889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:47.745 [2024-11-20 12:52:13.234894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:47.745 [2024-11-20 12:52:13.234901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:47.745 [2024-11-20 12:52:13.234906] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:47.745 [2024-11-20 12:52:13.234913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:47.745 [2024-11-20 12:52:13.234919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:47.745 [2024-11-20 12:52:13.234925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:47.745 [2024-11-20 12:52:13.234931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:47.745 [2024-11-20 12:52:13.234941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:47.745 [2024-11-20 12:52:13.234946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:47.745 [2024-11-20 12:52:13.234952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:47.745 [2024-11-20 12:52:13.234957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:47.745 [2024-11-20 12:52:13.234963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:47.745 [2024-11-20 12:52:13.234971] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:47.745 [2024-11-20 12:52:13.234980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:47.745 [2024-11-20 12:52:13.234987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:47.745 [2024-11-20 12:52:13.234993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:47.745 [2024-11-20 12:52:13.234999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:47.745 [2024-11-20 12:52:13.235005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:47.745 [2024-11-20 12:52:13.235011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:47.745 [2024-11-20 12:52:13.235018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:47.745 [2024-11-20 12:52:13.235023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:47.745 [2024-11-20 12:52:13.235031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:47.745 [2024-11-20 12:52:13.235037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:47.745 [2024-11-20 12:52:13.235050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:47.745 [2024-11-20 12:52:13.235056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:47.745 [2024-11-20 12:52:13.235063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:47.745 [2024-11-20 12:52:13.235068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:47.745 [2024-11-20 12:52:13.235076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:47.745 [2024-11-20 12:52:13.235081] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:47.745 [2024-11-20 12:52:13.235093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:47.745 [2024-11-20 12:52:13.235099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:47.745 [2024-11-20 12:52:13.235106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:47.745 [2024-11-20 12:52:13.235112] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:47.745 [2024-11-20 12:52:13.235119] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:47.745 [2024-11-20 12:52:13.235125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.745 [2024-11-20 12:52:13.235133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:47.745 [2024-11-20 12:52:13.235139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.961 ms 00:19:47.745 [2024-11-20 12:52:13.235145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.745 [2024-11-20 12:52:13.235212] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:47.745 [2024-11-20 12:52:13.235222] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:50.307 [2024-11-20 12:52:15.461106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.307 [2024-11-20 12:52:15.461537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:50.307 [2024-11-20 12:52:15.461607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2225.879 ms 00:19:50.307 [2024-11-20 12:52:15.461655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.307 [2024-11-20 12:52:15.486404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.307 [2024-11-20 12:52:15.486628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:50.307 [2024-11-20 12:52:15.486700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.456 ms 00:19:50.307 [2024-11-20 12:52:15.486768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.307 [2024-11-20 12:52:15.486924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.307 [2024-11-20 12:52:15.487046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:50.307 [2024-11-20 12:52:15.487122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:50.307 [2024-11-20 12:52:15.487227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.307 [2024-11-20 12:52:15.531172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.307 [2024-11-20 12:52:15.531541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:50.307 [2024-11-20 12:52:15.531842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.865 ms 00:19:50.307 [2024-11-20 12:52:15.532062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.307 [2024-11-20 12:52:15.532386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.307 [2024-11-20 12:52:15.532598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:50.307 [2024-11-20 12:52:15.532826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:50.307 [2024-11-20 12:52:15.532949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.307 [2024-11-20 12:52:15.533485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.307 [2024-11-20 12:52:15.533716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:50.307 [2024-11-20 12:52:15.533907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:19:50.307 [2024-11-20 12:52:15.534031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.307 [2024-11-20 12:52:15.534313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.307 [2024-11-20 12:52:15.534499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:50.307 [2024-11-20 12:52:15.534699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:19:50.307 [2024-11-20 12:52:15.534896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.307 [2024-11-20 12:52:15.549032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.307 [2024-11-20 12:52:15.549189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:50.307 [2024-11-20 12:52:15.549205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.920 ms 00:19:50.307 [2024-11-20 12:52:15.549214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.307 [2024-11-20 12:52:15.560473] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:50.307 [2024-11-20 12:52:15.574191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.307 [2024-11-20 12:52:15.574222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:50.307 [2024-11-20 12:52:15.574235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.875 ms 00:19:50.307 [2024-11-20 12:52:15.574243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.307 [2024-11-20 12:52:15.644315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.307 [2024-11-20 12:52:15.644364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:50.307 [2024-11-20 12:52:15.644380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.008 ms 00:19:50.307 [2024-11-20 12:52:15.644389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.307 [2024-11-20 12:52:15.644601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.307 [2024-11-20 12:52:15.644613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:50.307 [2024-11-20 12:52:15.644626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:19:50.307 [2024-11-20 12:52:15.644633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.307 [2024-11-20 12:52:15.667245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.307 [2024-11-20 12:52:15.667380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:50.307 [2024-11-20 12:52:15.667401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.577 ms 00:19:50.307 [2024-11-20 12:52:15.667409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.307 [2024-11-20 12:52:15.689443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.307 [2024-11-20 12:52:15.689474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:50.307 [2024-11-20 12:52:15.689487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.973 ms 00:19:50.307 [2024-11-20 12:52:15.689493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.307 [2024-11-20 12:52:15.690102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.307 [2024-11-20 12:52:15.690120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:50.307 [2024-11-20 12:52:15.690131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:19:50.307 [2024-11-20 12:52:15.690138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.307 [2024-11-20 12:52:15.756625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.307 [2024-11-20 12:52:15.756658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:50.307 [2024-11-20 12:52:15.756676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.455 ms 00:19:50.307 [2024-11-20 12:52:15.756684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
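The L2P numbers in this startup sequence are self-consistent: ftl0 exposes 23592960 logical blocks (the "L2P entries" notice earlier in the startup dump), each mapped by a 4-byte entry ("L2P address size: 4"), giving a 90 MiB table, which matches the "Region l2p ... blocks: 90.00 MiB" layout line; and since bdev_ftl_create was passed --l2p_dram_limit 60, only part of the table stays resident, hence the "l2p maximum resident size is: 59 (of 60) MiB" notice above. A back-of-the-envelope check, with the values copied from the trace:

    # L2P table size: one 4-byte entry per logical block (values from the log above).
    entries=23592960                             # "L2P entries: 23592960"
    entry_size=4                                 # "L2P address size: 4"
    echo $(( entries * entry_size / 1048576 ))   # 90 MiB, matching "Region l2p ... blocks: 90.00 MiB"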
00:19:50.307 [2024-11-20 12:52:15.780451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.307 [2024-11-20 12:52:15.780571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:50.307 [2024-11-20 12:52:15.780592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.683 ms 00:19:50.307 [2024-11-20 12:52:15.780600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.307 [2024-11-20 12:52:15.803154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.307 [2024-11-20 12:52:15.803265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:50.307 [2024-11-20 12:52:15.803282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.507 ms 00:19:50.307 [2024-11-20 12:52:15.803290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.572 [2024-11-20 12:52:15.826665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.572 [2024-11-20 12:52:15.826792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:50.572 [2024-11-20 12:52:15.826811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.319 ms 00:19:50.572 [2024-11-20 12:52:15.826830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.572 [2024-11-20 12:52:15.826878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.572 [2024-11-20 12:52:15.826889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:50.572 [2024-11-20 12:52:15.826901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:50.572 [2024-11-20 12:52:15.826908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.572 [2024-11-20 12:52:15.826978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.572 [2024-11-20 12:52:15.826986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:50.572 [2024-11-20 12:52:15.826996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:50.572 [2024-11-20 12:52:15.827004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.572 [2024-11-20 12:52:15.827781] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:50.572 { 00:19:50.572 "name": "ftl0", 00:19:50.572 "uuid": "200ffff6-6870-4bdc-85eb-29aedea7a1b0" 00:19:50.572 } 00:19:50.572 [2024-11-20 12:52:15.830786] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2608.971 ms, result 0 00:19:50.572 [2024-11-20 12:52:15.831448] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:50.572 12:52:15 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:50.572 12:52:15 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:50.572 12:52:15 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:50.572 12:52:15 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:50.572 12:52:15 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:50.572 12:52:15 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:50.572 12:52:15 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:50.572 12:52:16 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:50.834 [ 00:19:50.834 { 00:19:50.834 "name": "ftl0", 00:19:50.834 "aliases": [ 00:19:50.834 "200ffff6-6870-4bdc-85eb-29aedea7a1b0" 00:19:50.834 ], 00:19:50.834 "product_name": "FTL disk", 00:19:50.834 "block_size": 4096, 00:19:50.834 "num_blocks": 23592960, 00:19:50.834 "uuid": "200ffff6-6870-4bdc-85eb-29aedea7a1b0", 00:19:50.834 "assigned_rate_limits": { 00:19:50.834 "rw_ios_per_sec": 0, 00:19:50.834 "rw_mbytes_per_sec": 0, 00:19:50.834 "r_mbytes_per_sec": 0, 00:19:50.834 "w_mbytes_per_sec": 0 00:19:50.834 }, 00:19:50.834 "claimed": false, 00:19:50.834 "zoned": false, 00:19:50.834 "supported_io_types": { 00:19:50.834 "read": true, 00:19:50.834 "write": true, 00:19:50.834 "unmap": true, 00:19:50.834 "flush": true, 00:19:50.834 "reset": false, 00:19:50.834 "nvme_admin": false, 00:19:50.834 "nvme_io": false, 00:19:50.834 "nvme_io_md": false, 00:19:50.834 "write_zeroes": true, 00:19:50.834 "zcopy": false, 00:19:50.834 "get_zone_info": false, 00:19:50.834 "zone_management": false, 00:19:50.834 "zone_append": false, 00:19:50.834 "compare": false, 00:19:50.834 "compare_and_write": false, 00:19:50.834 "abort": false, 00:19:50.834 "seek_hole": false, 00:19:50.834 "seek_data": false, 00:19:50.834 "copy": false, 00:19:50.834 "nvme_iov_md": false 00:19:50.834 }, 00:19:50.834 "driver_specific": { 00:19:50.834 "ftl": { 00:19:50.834 "base_bdev": "65df5fd2-4de9-4289-ae82-6eaf997111ec", 00:19:50.834 "cache": "nvc0n1p0" 00:19:50.834 } 00:19:50.834 } 00:19:50.834 } 00:19:50.834 ] 00:19:50.834 12:52:16 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:50.834 12:52:16 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:50.834 12:52:16 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:51.095 12:52:16 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:51.095 12:52:16 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:51.095 12:52:16 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:51.095 { 00:19:51.095 "name": "ftl0", 00:19:51.095 "aliases": [ 00:19:51.095 "200ffff6-6870-4bdc-85eb-29aedea7a1b0" 00:19:51.095 ], 00:19:51.095 "product_name": "FTL disk", 00:19:51.095 "block_size": 4096, 00:19:51.095 "num_blocks": 23592960, 00:19:51.095 "uuid": "200ffff6-6870-4bdc-85eb-29aedea7a1b0", 00:19:51.095 "assigned_rate_limits": { 00:19:51.095 "rw_ios_per_sec": 0, 00:19:51.095 "rw_mbytes_per_sec": 0, 00:19:51.095 "r_mbytes_per_sec": 0, 00:19:51.095 "w_mbytes_per_sec": 0 00:19:51.095 }, 00:19:51.095 "claimed": false, 00:19:51.095 "zoned": false, 00:19:51.095 "supported_io_types": { 00:19:51.095 "read": true, 00:19:51.095 "write": true, 00:19:51.095 "unmap": true, 00:19:51.095 "flush": true, 00:19:51.095 "reset": false, 00:19:51.095 "nvme_admin": false, 00:19:51.095 "nvme_io": false, 00:19:51.095 "nvme_io_md": false, 00:19:51.095 "write_zeroes": true, 00:19:51.095 "zcopy": false, 00:19:51.095 "get_zone_info": false, 00:19:51.095 "zone_management": false, 00:19:51.095 "zone_append": false, 00:19:51.095 "compare": false, 00:19:51.095 "compare_and_write": false, 00:19:51.095 "abort": false, 00:19:51.095 "seek_hole": false, 00:19:51.095 "seek_data": false, 00:19:51.095 "copy": false, 00:19:51.095 "nvme_iov_md": false 00:19:51.095 }, 00:19:51.095 "driver_specific": { 00:19:51.095 "ftl": { 00:19:51.095 "base_bdev": "65df5fd2-4de9-4289-ae82-6eaf997111ec", 
00:19:51.095 "cache": "nvc0n1p0" 00:19:51.095 } 00:19:51.095 } 00:19:51.095 } 00:19:51.095 ]' 00:19:51.095 12:52:16 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:51.357 12:52:16 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:51.357 12:52:16 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:51.357 [2024-11-20 12:52:16.782519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.357 [2024-11-20 12:52:16.782663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:51.357 [2024-11-20 12:52:16.782683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:51.357 [2024-11-20 12:52:16.782696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.357 [2024-11-20 12:52:16.782732] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:51.357 [2024-11-20 12:52:16.785346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.357 [2024-11-20 12:52:16.785374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:51.357 [2024-11-20 12:52:16.785392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.579 ms 00:19:51.357 [2024-11-20 12:52:16.785400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.357 [2024-11-20 12:52:16.785885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.357 [2024-11-20 12:52:16.785905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:51.357 [2024-11-20 12:52:16.785915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:19:51.357 [2024-11-20 12:52:16.785922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.357 [2024-11-20 12:52:16.789565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.357 [2024-11-20 12:52:16.789587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:51.357 [2024-11-20 12:52:16.789599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.616 ms 00:19:51.357 [2024-11-20 12:52:16.789608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.357 [2024-11-20 12:52:16.796623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.357 [2024-11-20 12:52:16.796736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:51.357 [2024-11-20 12:52:16.796763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.973 ms 00:19:51.357 [2024-11-20 12:52:16.796772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.357 [2024-11-20 12:52:16.820205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.357 [2024-11-20 12:52:16.820313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:51.357 [2024-11-20 12:52:16.820334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.356 ms 00:19:51.357 [2024-11-20 12:52:16.820341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.357 [2024-11-20 12:52:16.834638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.357 [2024-11-20 12:52:16.834670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:51.357 [2024-11-20 12:52:16.834684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.244 ms 00:19:51.357 [2024-11-20 12:52:16.834694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.357 [2024-11-20 12:52:16.834912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.357 [2024-11-20 12:52:16.834923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:51.357 [2024-11-20 12:52:16.834933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:19:51.357 [2024-11-20 12:52:16.834941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.357 [2024-11-20 12:52:16.857303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.357 [2024-11-20 12:52:16.857332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:51.357 [2024-11-20 12:52:16.857344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.337 ms 00:19:51.357 [2024-11-20 12:52:16.857351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.620 [2024-11-20 12:52:16.879642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.620 [2024-11-20 12:52:16.879671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:51.620 [2024-11-20 12:52:16.879684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.235 ms 00:19:51.620 [2024-11-20 12:52:16.879691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.620 [2024-11-20 12:52:16.901588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.620 [2024-11-20 12:52:16.901705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:51.620 [2024-11-20 12:52:16.901723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.830 ms 00:19:51.620 [2024-11-20 12:52:16.901730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.620 [2024-11-20 12:52:16.923511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.620 [2024-11-20 12:52:16.923539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:51.620 [2024-11-20 12:52:16.923551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.675 ms 00:19:51.620 [2024-11-20 12:52:16.923558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.620 [2024-11-20 12:52:16.923611] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:51.620 [2024-11-20 12:52:16.923624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:51.620 [2024-11-20 12:52:16.923636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:51.620 [2024-11-20 12:52:16.923644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:51.620 [2024-11-20 12:52:16.923653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:51.620 [2024-11-20 12:52:16.923660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:51.620 [2024-11-20 12:52:16.923671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:51.620 [2024-11-20 12:52:16.923678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:51.620 [2024-11-20 12:52:16.923688] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:51.620 [2024-11-20 12:52:16.923695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 
[2024-11-20 12:52:16.923945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.923995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:19:51.621 [2024-11-20 12:52:16.924150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:51.621 [2024-11-20 12:52:16.924472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:51.622 [2024-11-20 12:52:16.924479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:51.622 [2024-11-20 12:52:16.924488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:51.622 [2024-11-20 12:52:16.924503] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:51.622 [2024-11-20 12:52:16.924514] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 200ffff6-6870-4bdc-85eb-29aedea7a1b0 00:19:51.622 [2024-11-20 12:52:16.924521] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:51.622 [2024-11-20 12:52:16.924530] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:51.622 [2024-11-20 12:52:16.924536] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:51.622 [2024-11-20 12:52:16.924546] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:51.622 [2024-11-20 12:52:16.924554] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:51.622 [2024-11-20 12:52:16.924563] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:19:51.622 [2024-11-20 12:52:16.924570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:51.622 [2024-11-20 12:52:16.924578] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:51.622 [2024-11-20 12:52:16.924584] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:51.622 [2024-11-20 12:52:16.924592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.622 [2024-11-20 12:52:16.924600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:51.622 [2024-11-20 12:52:16.924609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.984 ms 00:19:51.622 [2024-11-20 12:52:16.924616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.622 [2024-11-20 12:52:16.936992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.622 [2024-11-20 12:52:16.937020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:51.622 [2024-11-20 12:52:16.937036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.344 ms 00:19:51.622 [2024-11-20 12:52:16.937044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.622 [2024-11-20 12:52:16.937408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.622 [2024-11-20 12:52:16.937418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:51.622 [2024-11-20 12:52:16.937427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:19:51.622 [2024-11-20 12:52:16.937434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.622 [2024-11-20 12:52:16.980515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.622 [2024-11-20 12:52:16.980549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:51.622 [2024-11-20 12:52:16.980560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.622 [2024-11-20 12:52:16.980567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.622 [2024-11-20 12:52:16.980661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.622 [2024-11-20 12:52:16.980670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:51.622 [2024-11-20 12:52:16.980680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.622 [2024-11-20 12:52:16.980687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.622 [2024-11-20 12:52:16.980761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.622 [2024-11-20 12:52:16.980770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:51.622 [2024-11-20 12:52:16.980784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.622 [2024-11-20 12:52:16.980791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.622 [2024-11-20 12:52:16.980817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.622 [2024-11-20 12:52:16.980825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:51.622 [2024-11-20 12:52:16.980833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.622 [2024-11-20 12:52:16.980840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.622 [2024-11-20 12:52:17.052799] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.622 [2024-11-20 12:52:17.052837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:51.622 [2024-11-20 12:52:17.052847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.622 [2024-11-20 12:52:17.052854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.622 [2024-11-20 12:52:17.100071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.622 [2024-11-20 12:52:17.100105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:51.622 [2024-11-20 12:52:17.100115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.622 [2024-11-20 12:52:17.100121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.622 [2024-11-20 12:52:17.100194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.622 [2024-11-20 12:52:17.100202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:51.622 [2024-11-20 12:52:17.100221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.622 [2024-11-20 12:52:17.100230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.622 [2024-11-20 12:52:17.100271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.622 [2024-11-20 12:52:17.100277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:51.622 [2024-11-20 12:52:17.100284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.622 [2024-11-20 12:52:17.100290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.622 [2024-11-20 12:52:17.100379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.622 [2024-11-20 12:52:17.100387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:51.622 [2024-11-20 12:52:17.100394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.622 [2024-11-20 12:52:17.100399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.622 [2024-11-20 12:52:17.100443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.622 [2024-11-20 12:52:17.100450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:51.622 [2024-11-20 12:52:17.100457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.622 [2024-11-20 12:52:17.100463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.622 [2024-11-20 12:52:17.100502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.622 [2024-11-20 12:52:17.100509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:51.622 [2024-11-20 12:52:17.100518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.622 [2024-11-20 12:52:17.100523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.622 [2024-11-20 12:52:17.100576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.622 [2024-11-20 12:52:17.100583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:51.622 [2024-11-20 12:52:17.100590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.622 [2024-11-20 12:52:17.100596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:51.622 [2024-11-20 12:52:17.100731] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 318.203 ms, result 0 00:19:51.622 true 00:19:51.622 12:52:17 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76397 00:19:51.622 12:52:17 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76397 ']' 00:19:51.622 12:52:17 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76397 00:19:51.622 12:52:17 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:51.622 12:52:17 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.622 12:52:17 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76397 00:19:51.884 killing process with pid 76397 00:19:51.884 12:52:17 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:51.884 12:52:17 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:51.884 12:52:17 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76397' 00:19:51.884 12:52:17 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76397 00:19:51.884 12:52:17 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76397 00:19:58.474 12:52:22 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:58.474 65536+0 records in 00:19:58.474 65536+0 records out 00:19:58.474 268435456 bytes (268 MB, 256 MiB) copied, 0.793415 s, 338 MB/s 00:19:58.474 12:52:23 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:58.474 [2024-11-20 12:52:23.610675] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
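The dd/spdk_dd pair above is the usual write-then-verify pattern: a 256 MiB random pattern is generated on the filesystem, then pushed into the FTL bdev through spdk_dd using the same --json config that describes ftl0. A minimal sketch of the full round trip, with illustrative file names; the read-back line assumes spdk_dd's --ib/--of path and is not taken from this log:

# generate a 256 MiB pattern, write it to the bdev, read it back, compare
dd if=/dev/urandom of=pattern bs=4K count=65536
spdk_dd --if=pattern --ob=ftl0 --json=ftl.json                  # file -> bdev
spdk_dd --ib=ftl0 --of=readback --count=65536 --json=ftl.json   # bdev -> file
cmp pattern readback && echo 'pattern verified'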
00:19:58.474 [2024-11-20 12:52:23.610890] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76579 ] 00:19:58.474 [2024-11-20 12:52:23.765730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.474 [2024-11-20 12:52:23.863675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.736 [2024-11-20 12:52:24.137093] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:58.736 [2024-11-20 12:52:24.137179] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:58.999 [2024-11-20 12:52:24.298832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.999 [2024-11-20 12:52:24.299078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:58.999 [2024-11-20 12:52:24.299104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:58.999 [2024-11-20 12:52:24.299114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.999 [2024-11-20 12:52:24.302171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.999 [2024-11-20 12:52:24.302363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:58.999 [2024-11-20 12:52:24.302385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.023 ms 00:19:58.999 [2024-11-20 12:52:24.302393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.999 [2024-11-20 12:52:24.302633] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:58.999 [2024-11-20 12:52:24.303416] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:58.999 [2024-11-20 12:52:24.303454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.999 [2024-11-20 12:52:24.303463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:58.999 [2024-11-20 12:52:24.303473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.837 ms 00:19:58.999 [2024-11-20 12:52:24.303482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.999 [2024-11-20 12:52:24.305366] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:58.999 [2024-11-20 12:52:24.319717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.999 [2024-11-20 12:52:24.319783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:58.999 [2024-11-20 12:52:24.319798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.353 ms 00:19:58.999 [2024-11-20 12:52:24.319806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.999 [2024-11-20 12:52:24.319933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.999 [2024-11-20 12:52:24.319946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:58.999 [2024-11-20 12:52:24.319957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:58.999 [2024-11-20 12:52:24.319965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.999 [2024-11-20 12:52:24.328455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:58.999 [2024-11-20 12:52:24.328504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:58.999 [2024-11-20 12:52:24.328515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.442 ms 00:19:58.999 [2024-11-20 12:52:24.328523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.999 [2024-11-20 12:52:24.328636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.999 [2024-11-20 12:52:24.328647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:58.999 [2024-11-20 12:52:24.328657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:58.999 [2024-11-20 12:52:24.328664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.999 [2024-11-20 12:52:24.328693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.999 [2024-11-20 12:52:24.328706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:58.999 [2024-11-20 12:52:24.328715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:58.999 [2024-11-20 12:52:24.328722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.999 [2024-11-20 12:52:24.328781] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:58.999 [2024-11-20 12:52:24.332940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.999 [2024-11-20 12:52:24.332983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:58.999 [2024-11-20 12:52:24.332995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.165 ms 00:19:58.999 [2024-11-20 12:52:24.333003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.999 [2024-11-20 12:52:24.333081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.999 [2024-11-20 12:52:24.333092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:58.999 [2024-11-20 12:52:24.333102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:58.999 [2024-11-20 12:52:24.333111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.999 [2024-11-20 12:52:24.333133] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:58.999 [2024-11-20 12:52:24.333156] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:58.999 [2024-11-20 12:52:24.333193] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:58.999 [2024-11-20 12:52:24.333209] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:58.999 [2024-11-20 12:52:24.333315] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:58.999 [2024-11-20 12:52:24.333327] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:58.999 [2024-11-20 12:52:24.333337] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:58.999 [2024-11-20 12:52:24.333347] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:58.999 [2024-11-20 12:52:24.333360] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:58.999 [2024-11-20 12:52:24.333368] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:58.999 [2024-11-20 12:52:24.333375] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:58.999 [2024-11-20 12:52:24.333389] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:58.999 [2024-11-20 12:52:24.333397] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:58.999 [2024-11-20 12:52:24.333405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.999 [2024-11-20 12:52:24.333413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:58.999 [2024-11-20 12:52:24.333422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:19:58.999 [2024-11-20 12:52:24.333429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.999 [2024-11-20 12:52:24.333518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.999 [2024-11-20 12:52:24.333527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:58.999 [2024-11-20 12:52:24.333538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:58.999 [2024-11-20 12:52:24.333545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.999 [2024-11-20 12:52:24.333648] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:58.999 [2024-11-20 12:52:24.333659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:58.999 [2024-11-20 12:52:24.333669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:58.999 [2024-11-20 12:52:24.333676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.999 [2024-11-20 12:52:24.333685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:58.999 [2024-11-20 12:52:24.333691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:58.999 [2024-11-20 12:52:24.333698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:58.999 [2024-11-20 12:52:24.333707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:58.999 [2024-11-20 12:52:24.333715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:59.000 [2024-11-20 12:52:24.333722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:59.000 [2024-11-20 12:52:24.333729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:59.000 [2024-11-20 12:52:24.333768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:59.000 [2024-11-20 12:52:24.333776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:59.000 [2024-11-20 12:52:24.333791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:59.000 [2024-11-20 12:52:24.333798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:59.000 [2024-11-20 12:52:24.333806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.000 [2024-11-20 12:52:24.333813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:59.000 [2024-11-20 12:52:24.333826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:59.000 [2024-11-20 12:52:24.333834] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.000 [2024-11-20 12:52:24.333842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:59.000 [2024-11-20 12:52:24.333849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:59.000 [2024-11-20 12:52:24.333856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:59.000 [2024-11-20 12:52:24.333864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:59.000 [2024-11-20 12:52:24.333871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:59.000 [2024-11-20 12:52:24.333877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:59.000 [2024-11-20 12:52:24.333885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:59.000 [2024-11-20 12:52:24.333892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:59.000 [2024-11-20 12:52:24.333899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:59.000 [2024-11-20 12:52:24.333906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:59.000 [2024-11-20 12:52:24.333914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:59.000 [2024-11-20 12:52:24.333920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:59.000 [2024-11-20 12:52:24.333927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:59.000 [2024-11-20 12:52:24.333934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:59.000 [2024-11-20 12:52:24.333941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:59.000 [2024-11-20 12:52:24.333948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:59.000 [2024-11-20 12:52:24.333955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:59.000 [2024-11-20 12:52:24.333961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:59.000 [2024-11-20 12:52:24.333967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:59.000 [2024-11-20 12:52:24.333974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:59.000 [2024-11-20 12:52:24.333980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.000 [2024-11-20 12:52:24.333987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:59.000 [2024-11-20 12:52:24.333993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:59.000 [2024-11-20 12:52:24.334000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.000 [2024-11-20 12:52:24.334007] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:59.000 [2024-11-20 12:52:24.334015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:59.000 [2024-11-20 12:52:24.334022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:59.000 [2024-11-20 12:52:24.334032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.000 [2024-11-20 12:52:24.334041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:59.000 [2024-11-20 12:52:24.334048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:59.000 [2024-11-20 12:52:24.334056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:59.000 
[2024-11-20 12:52:24.334064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:59.000 [2024-11-20 12:52:24.334070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:59.000 [2024-11-20 12:52:24.334077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:59.000 [2024-11-20 12:52:24.334086] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:59.000 [2024-11-20 12:52:24.334095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:59.000 [2024-11-20 12:52:24.334104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:59.000 [2024-11-20 12:52:24.334111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:59.000 [2024-11-20 12:52:24.334118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:59.000 [2024-11-20 12:52:24.334126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:59.000 [2024-11-20 12:52:24.334134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:59.000 [2024-11-20 12:52:24.334141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:59.000 [2024-11-20 12:52:24.334149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:59.000 [2024-11-20 12:52:24.334156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:59.000 [2024-11-20 12:52:24.334163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:59.000 [2024-11-20 12:52:24.334171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:59.000 [2024-11-20 12:52:24.334178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:59.000 [2024-11-20 12:52:24.334185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:59.000 [2024-11-20 12:52:24.334191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:59.000 [2024-11-20 12:52:24.334199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:59.000 [2024-11-20 12:52:24.334206] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:59.000 [2024-11-20 12:52:24.334215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:59.000 [2024-11-20 12:52:24.334223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:59.000 [2024-11-20 12:52:24.334230] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:59.000 [2024-11-20 12:52:24.334238] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:59.000 [2024-11-20 12:52:24.334245] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:59.000 [2024-11-20 12:52:24.334253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.000 [2024-11-20 12:52:24.334261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:59.000 [2024-11-20 12:52:24.334271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:19:59.000 [2024-11-20 12:52:24.334278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.000 [2024-11-20 12:52:24.366966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.000 [2024-11-20 12:52:24.367167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:59.000 [2024-11-20 12:52:24.367854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.633 ms 00:19:59.000 [2024-11-20 12:52:24.367986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.000 [2024-11-20 12:52:24.368180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.000 [2024-11-20 12:52:24.368293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:59.000 [2024-11-20 12:52:24.368322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:59.000 [2024-11-20 12:52:24.368372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.000 [2024-11-20 12:52:24.416885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.000 [2024-11-20 12:52:24.417104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:59.000 [2024-11-20 12:52:24.417523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.463 ms 00:19:59.000 [2024-11-20 12:52:24.417592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.000 [2024-11-20 12:52:24.418167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.000 [2024-11-20 12:52:24.418290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:59.000 [2024-11-20 12:52:24.418347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:59.000 [2024-11-20 12:52:24.418373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.000 [2024-11-20 12:52:24.418954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.000 [2024-11-20 12:52:24.419127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:59.000 [2024-11-20 12:52:24.419159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:19:59.000 [2024-11-20 12:52:24.419190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.000 [2024-11-20 12:52:24.419364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.000 [2024-11-20 12:52:24.419391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:59.000 [2024-11-20 12:52:24.419412] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:19:59.000 [2024-11-20 12:52:24.419431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.000 [2024-11-20 12:52:24.436087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.000 [2024-11-20 12:52:24.436258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:59.000 [2024-11-20 12:52:24.436322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.560 ms 00:19:59.000 [2024-11-20 12:52:24.436346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.000 [2024-11-20 12:52:24.451153] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:59.000 [2024-11-20 12:52:24.451352] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:59.001 [2024-11-20 12:52:24.451419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.001 [2024-11-20 12:52:24.451441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:59.001 [2024-11-20 12:52:24.451464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.937 ms 00:19:59.001 [2024-11-20 12:52:24.451482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.001 [2024-11-20 12:52:24.478545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.001 [2024-11-20 12:52:24.478758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:59.001 [2024-11-20 12:52:24.478854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.467 ms 00:19:59.001 [2024-11-20 12:52:24.478881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.001 [2024-11-20 12:52:24.491929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.001 [2024-11-20 12:52:24.492116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:59.001 [2024-11-20 12:52:24.492178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.940 ms 00:19:59.001 [2024-11-20 12:52:24.492199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.001 [2024-11-20 12:52:24.505493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.001 [2024-11-20 12:52:24.505669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:59.001 [2024-11-20 12:52:24.505729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.197 ms 00:19:59.001 [2024-11-20 12:52:24.505773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.001 [2024-11-20 12:52:24.506460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.001 [2024-11-20 12:52:24.506529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:59.001 [2024-11-20 12:52:24.506757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:19:59.001 [2024-11-20 12:52:24.507136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.262 [2024-11-20 12:52:24.575140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.262 [2024-11-20 12:52:24.575215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:59.262 [2024-11-20 12:52:24.575234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 67.926 ms 00:19:59.262 [2024-11-20 12:52:24.575244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.262 [2024-11-20 12:52:24.586857] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:59.262 [2024-11-20 12:52:24.606300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.262 [2024-11-20 12:52:24.606528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:59.262 [2024-11-20 12:52:24.606550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.946 ms 00:19:59.262 [2024-11-20 12:52:24.606559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.262 [2024-11-20 12:52:24.606659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.262 [2024-11-20 12:52:24.606675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:59.262 [2024-11-20 12:52:24.606686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:59.262 [2024-11-20 12:52:24.606695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.262 [2024-11-20 12:52:24.606795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.262 [2024-11-20 12:52:24.606806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:59.262 [2024-11-20 12:52:24.606816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:19:59.262 [2024-11-20 12:52:24.606824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.262 [2024-11-20 12:52:24.606855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.262 [2024-11-20 12:52:24.606864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:59.262 [2024-11-20 12:52:24.606877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:59.262 [2024-11-20 12:52:24.606885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.262 [2024-11-20 12:52:24.606922] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:59.262 [2024-11-20 12:52:24.606933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.262 [2024-11-20 12:52:24.606941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:59.262 [2024-11-20 12:52:24.606950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:59.263 [2024-11-20 12:52:24.606958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.263 [2024-11-20 12:52:24.633150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.263 [2024-11-20 12:52:24.633353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:59.263 [2024-11-20 12:52:24.633377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.164 ms 00:19:59.263 [2024-11-20 12:52:24.633388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.263 [2024-11-20 12:52:24.633505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.263 [2024-11-20 12:52:24.633517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:59.263 [2024-11-20 12:52:24.633527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:59.263 [2024-11-20 12:52:24.633536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
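
A note on reading the "SB metadata layout" dumps a few records back (one section for nvc, one for the base dev): each region is printed as Region type/ver/blk_offs/blk_sz in hex, with type 0xfffffffe marking unallocated space, and the regions of one device should tile its space exactly, each starting where the previous one ends. A minimal sketch of that check, assuming the text of a single layout section has been collected into a string (the names `check_layout` and `section` are illustrative, not part of the test suite):

    import re

    # Matches e.g. "Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00"
    REGION = re.compile(r"Region type:(0x[0-9a-fA-F]+) ver:\d+ "
                        r"blk_offs:(0x[0-9a-fA-F]+) blk_sz:(0x[0-9a-fA-F]+)")

    def check_layout(section: str) -> None:
        # Sort regions by start offset, then require each to end exactly
        # where its successor begins (no gaps, no overlaps).
        regions = sorted((int(offs, 16), int(size, 16), rtype)
                         for rtype, offs, size in REGION.findall(section))
        for (offs, size, rtype), (noffs, _, nxt) in zip(regions, regions[1:]):
            if offs + size != noffs:
                print(f"gap/overlap: {rtype} ends 0x{offs + size:x}, "
                      f"{nxt} starts 0x{noffs:x}")

Run against the nvc section above it prints nothing: 0x0+0x20 = 0x20, 0x20+0x5a00 = 0x5a20, and so on down to the 0xfffffffe free-space tail at 0x7c20.
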
00:19:59.263 [2024-11-20 12:52:24.635261] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:59.263 [2024-11-20 12:52:24.638852] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 336.096 ms, result 0 00:19:59.263 [2024-11-20 12:52:24.640280] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:59.263 [2024-11-20 12:52:24.654205] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:00.209  [2024-11-20T12:52:26.671Z] Copying: 17/256 [MB] (17 MBps) [2024-11-20T12:52:28.061Z] Copying: 32/256 [MB] (15 MBps) [2024-11-20T12:52:29.005Z] Copying: 72/256 [MB] (39 MBps) [2024-11-20T12:52:29.949Z] Copying: 108/256 [MB] (36 MBps) [2024-11-20T12:52:30.892Z] Copying: 129/256 [MB] (20 MBps) [2024-11-20T12:52:31.835Z] Copying: 171/256 [MB] (42 MBps) [2024-11-20T12:52:32.780Z] Copying: 187/256 [MB] (15 MBps) [2024-11-20T12:52:33.834Z] Copying: 200/256 [MB] (13 MBps) [2024-11-20T12:52:34.779Z] Copying: 216/256 [MB] (15 MBps) [2024-11-20T12:52:35.723Z] Copying: 233/256 [MB] (16 MBps) [2024-11-20T12:52:35.985Z] Copying: 250/256 [MB] (17 MBps) [2024-11-20T12:52:35.985Z] Copying: 256/256 [MB] (average 22 MBps)[2024-11-20 12:52:35.909953] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:10.466 [2024-11-20 12:52:35.920439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.466 [2024-11-20 12:52:35.920491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:10.466 [2024-11-20 12:52:35.920507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:10.466 [2024-11-20 12:52:35.920516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.466 [2024-11-20 12:52:35.920540] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:10.466 [2024-11-20 12:52:35.923619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.466 [2024-11-20 12:52:35.923668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:10.466 [2024-11-20 12:52:35.923680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.063 ms 00:20:10.466 [2024-11-20 12:52:35.923690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.466 [2024-11-20 12:52:35.926568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.466 [2024-11-20 12:52:35.926616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:10.466 [2024-11-20 12:52:35.926628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.849 ms 00:20:10.466 [2024-11-20 12:52:35.926636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.466 [2024-11-20 12:52:35.935848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.466 [2024-11-20 12:52:35.935898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:10.467 [2024-11-20 12:52:35.935917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.193 ms 00:20:10.467 [2024-11-20 12:52:35.935925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.467 [2024-11-20 12:52:35.942937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.467 
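
Each management step in this sequence is logged as a trace_step pair, a `name:` record followed by a `duration:` record, and the whole sequence is capped by a finish_msg line ("Management process finished, name 'FTL startup', duration = 336.096 ms, result 0" above). A minimal sketch for pulling those pairs out of the raw console log and ranking the slow steps; it assumes one log record per line, as in the unwrapped log, and a local copy named build.log:

    import re

    # 428:trace_step carries the step name, 430:trace_step its duration in ms.
    name_re = re.compile(r"428:trace_step: .*?name: (.+)")
    dur_re  = re.compile(r"430:trace_step: .*?duration: ([0-9.]+) ms")

    steps, pending = [], None
    with open("build.log") as log:            # filename is an assumption
        for line in log:
            m = name_re.search(line)
            if m:
                pending = m.group(1).strip()
            m = dur_re.search(line)
            if m and pending is not None:
                steps.append((pending, float(m.group(1))))
                pending = None

    # Slowest steps first; their sum approximates the finish_msg total,
    # which also includes time spent between steps.
    for name, ms in sorted(steps, key=lambda s: s[1], reverse=True)[:5]:
        print(f"{ms:9.3f} ms  {name}")
    print(f"{sum(ms for _, ms in steps):9.3f} ms  across {len(steps)} steps")

For the startup above, "Restore P2L checkpoints" (67.926 ms), "Initialize NV cache" (48.463 ms), and "Initialize metadata" (32.633 ms) dominate the reported 336.096 ms total.
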
[2024-11-20 12:52:35.943128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:10.467 [2024-11-20 12:52:35.943149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.966 ms 00:20:10.467 [2024-11-20 12:52:35.943159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.467 [2024-11-20 12:52:35.968386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.467 [2024-11-20 12:52:35.968435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:10.467 [2024-11-20 12:52:35.968448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.155 ms 00:20:10.467 [2024-11-20 12:52:35.968455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.728 [2024-11-20 12:52:35.984088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.728 [2024-11-20 12:52:35.984136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:10.728 [2024-11-20 12:52:35.984158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.582 ms 00:20:10.728 [2024-11-20 12:52:35.984169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.728 [2024-11-20 12:52:35.984320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.728 [2024-11-20 12:52:35.984332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:10.728 [2024-11-20 12:52:35.984341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:20:10.728 [2024-11-20 12:52:35.984349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.728 [2024-11-20 12:52:36.010205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.728 [2024-11-20 12:52:36.010397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:10.728 [2024-11-20 12:52:36.010419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.840 ms 00:20:10.728 [2024-11-20 12:52:36.010427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.728 [2024-11-20 12:52:36.035831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.728 [2024-11-20 12:52:36.035877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:10.728 [2024-11-20 12:52:36.035888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.311 ms 00:20:10.728 [2024-11-20 12:52:36.035895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.728 [2024-11-20 12:52:36.060232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.728 [2024-11-20 12:52:36.060276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:10.728 [2024-11-20 12:52:36.060288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.287 ms 00:20:10.728 [2024-11-20 12:52:36.060295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.728 [2024-11-20 12:52:36.084948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.728 [2024-11-20 12:52:36.084994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:10.728 [2024-11-20 12:52:36.085006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.573 ms 00:20:10.728 [2024-11-20 12:52:36.085014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.728 [2024-11-20 12:52:36.085061] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:10.728 [2024-11-20 12:52:36.085084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:10.728 [2024-11-20 12:52:36.085094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:10.728 [2024-11-20 12:52:36.085102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:10.728 [2024-11-20 12:52:36.085110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:10.728 [2024-11-20 12:52:36.085118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:10.728 [2024-11-20 12:52:36.085125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:10.728 [2024-11-20 12:52:36.085133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:10.728 [2024-11-20 12:52:36.085141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:10.728 [2024-11-20 12:52:36.085148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:10.728 [2024-11-20 12:52:36.085156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:10.728 [2024-11-20 12:52:36.085164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:10.728 [2024-11-20 12:52:36.085172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:10.728 [2024-11-20 12:52:36.085179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:10.728 [2024-11-20 12:52:36.085187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:10.728 [2024-11-20 12:52:36.085194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:10.728 [2024-11-20 12:52:36.085201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085267] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 
12:52:36.085454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:20:10.729 [2024-11-20 12:52:36.085642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:10.729 [2024-11-20 12:52:36.085883] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:10.729 [2024-11-20 12:52:36.085892] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 200ffff6-6870-4bdc-85eb-29aedea7a1b0 00:20:10.729 [2024-11-20 12:52:36.085901] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:10.729 [2024-11-20 12:52:36.085908] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:10.729 [2024-11-20 12:52:36.085915] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:10.729 [2024-11-20 12:52:36.085923] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:10.730 [2024-11-20 12:52:36.085930] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:10.730 [2024-11-20 12:52:36.085942] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:10.730 [2024-11-20 12:52:36.085950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:10.730 [2024-11-20 12:52:36.085957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:10.730 [2024-11-20 12:52:36.085964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:10.730 [2024-11-20 12:52:36.085972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.730 [2024-11-20 12:52:36.085979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:10.730 [2024-11-20 12:52:36.085991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms 00:20:10.730 [2024-11-20 12:52:36.085999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.730 [2024-11-20 12:52:36.099766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.730 [2024-11-20 12:52:36.099950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:10.730 [2024-11-20 12:52:36.099968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.735 ms 00:20:10.730 [2024-11-20 12:52:36.099976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.730 [2024-11-20 12:52:36.100375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.730 [2024-11-20 12:52:36.100395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:10.730 [2024-11-20 12:52:36.100405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:20:10.730 [2024-11-20 12:52:36.100413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.730 [2024-11-20 12:52:36.139457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.730 [2024-11-20 12:52:36.139660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:10.730 [2024-11-20 12:52:36.139681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.730 [2024-11-20 12:52:36.139691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.730 [2024-11-20 12:52:36.139822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.730 [2024-11-20 12:52:36.139837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:10.730 [2024-11-20 12:52:36.139847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:20:10.730 [2024-11-20 12:52:36.139855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.730 [2024-11-20 12:52:36.139908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.730 [2024-11-20 12:52:36.139918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:10.730 [2024-11-20 12:52:36.139927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.730 [2024-11-20 12:52:36.139935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.730 [2024-11-20 12:52:36.139952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.730 [2024-11-20 12:52:36.139961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:10.730 [2024-11-20 12:52:36.139972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.730 [2024-11-20 12:52:36.139980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.730 [2024-11-20 12:52:36.223061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.730 [2024-11-20 12:52:36.223118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:10.730 [2024-11-20 12:52:36.223132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.730 [2024-11-20 12:52:36.223141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.992 [2024-11-20 12:52:36.291866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.992 [2024-11-20 12:52:36.291923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:10.992 [2024-11-20 12:52:36.291941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.992 [2024-11-20 12:52:36.291951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.992 [2024-11-20 12:52:36.292030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.992 [2024-11-20 12:52:36.292041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:10.992 [2024-11-20 12:52:36.292050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.992 [2024-11-20 12:52:36.292058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.992 [2024-11-20 12:52:36.292089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.992 [2024-11-20 12:52:36.292099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:10.992 [2024-11-20 12:52:36.292107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.992 [2024-11-20 12:52:36.292120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.992 [2024-11-20 12:52:36.292223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.992 [2024-11-20 12:52:36.292234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:10.992 [2024-11-20 12:52:36.292243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.992 [2024-11-20 12:52:36.292251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.992 [2024-11-20 12:52:36.292286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.992 [2024-11-20 12:52:36.292297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:10.992 
[2024-11-20 12:52:36.292305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.992 [2024-11-20 12:52:36.292313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.992 [2024-11-20 12:52:36.292361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.992 [2024-11-20 12:52:36.292372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:10.992 [2024-11-20 12:52:36.292381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.992 [2024-11-20 12:52:36.292389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.992 [2024-11-20 12:52:36.292438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.992 [2024-11-20 12:52:36.292450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:10.992 [2024-11-20 12:52:36.292459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.992 [2024-11-20 12:52:36.292470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.992 [2024-11-20 12:52:36.292627] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 372.176 ms, result 0 00:20:11.938 00:20:11.938 00:20:11.938 12:52:37 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76720 00:20:11.938 12:52:37 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76720 00:20:11.938 12:52:37 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:11.938 12:52:37 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76720 ']' 00:20:11.938 12:52:37 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.938 12:52:37 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.938 12:52:37 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.938 12:52:37 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.938 12:52:37 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:11.938 [2024-11-20 12:52:37.425541] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
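
The xtrace above shows trim.sh launching a fresh spdk_tgt (with -L ftl_init) and then blocking in waitforlisten until the target's RPC socket at /var/tmp/spdk.sock accepts connections, retrying up to max_retries=100 times. A conceptual sketch of that wait, not SPDK's own helper; the 0.1 s retry delay is an assumption:

    import socket
    import time

    def wait_for_rpc(path="/var/tmp/spdk.sock", retries=100, delay=0.1):
        """Poll a UNIX-domain socket until a connect() succeeds."""
        for _ in range(retries):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                sock.connect(path)
                return True           # target is up and listening
            except OSError:
                time.sleep(delay)     # not listening yet; retry
            finally:
                sock.close()
        return False

Only once that connect succeeds does the script move on to rpc.py load_config, which is why the "Waiting for process to start up..." message precedes all of the ftl0 records that follow.
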
00:20:11.938 [2024-11-20 12:52:37.425683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76720 ] 00:20:12.200 [2024-11-20 12:52:37.590641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.200 [2024-11-20 12:52:37.714611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.141 12:52:38 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.141 12:52:38 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:13.141 12:52:38 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:13.141 [2024-11-20 12:52:38.603326] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:13.141 [2024-11-20 12:52:38.603618] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:13.404 [2024-11-20 12:52:38.770323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.404 [2024-11-20 12:52:38.770382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:13.404 [2024-11-20 12:52:38.770400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:13.404 [2024-11-20 12:52:38.770409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.404 [2024-11-20 12:52:38.773840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.404 [2024-11-20 12:52:38.773896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:13.404 [2024-11-20 12:52:38.773911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.406 ms 00:20:13.404 [2024-11-20 12:52:38.773920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.404 [2024-11-20 12:52:38.774065] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:13.404 [2024-11-20 12:52:38.774849] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:13.404 [2024-11-20 12:52:38.774883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.404 [2024-11-20 12:52:38.774893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:13.404 [2024-11-20 12:52:38.774905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.833 ms 00:20:13.404 [2024-11-20 12:52:38.774912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.404 [2024-11-20 12:52:38.776639] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:13.404 [2024-11-20 12:52:38.791445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.404 [2024-11-20 12:52:38.791505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:13.404 [2024-11-20 12:52:38.791545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.814 ms 00:20:13.404 [2024-11-20 12:52:38.791557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.404 [2024-11-20 12:52:38.791676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.404 [2024-11-20 12:52:38.791692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:13.404 [2024-11-20 12:52:38.791702] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:13.404 [2024-11-20 12:52:38.791711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.404 [2024-11-20 12:52:38.799895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.404 [2024-11-20 12:52:38.799947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:13.404 [2024-11-20 12:52:38.799959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.100 ms 00:20:13.404 [2024-11-20 12:52:38.799968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.404 [2024-11-20 12:52:38.800088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.404 [2024-11-20 12:52:38.800102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:13.404 [2024-11-20 12:52:38.800111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:20:13.404 [2024-11-20 12:52:38.800120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.404 [2024-11-20 12:52:38.800153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.404 [2024-11-20 12:52:38.800163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:13.404 [2024-11-20 12:52:38.800171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:13.404 [2024-11-20 12:52:38.800181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.404 [2024-11-20 12:52:38.800205] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:13.404 [2024-11-20 12:52:38.804211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.404 [2024-11-20 12:52:38.804252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:13.404 [2024-11-20 12:52:38.804286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.009 ms 00:20:13.404 [2024-11-20 12:52:38.804294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.404 [2024-11-20 12:52:38.804373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.404 [2024-11-20 12:52:38.804384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:13.404 [2024-11-20 12:52:38.804395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:13.404 [2024-11-20 12:52:38.804405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.404 [2024-11-20 12:52:38.804428] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:13.404 [2024-11-20 12:52:38.804448] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:13.404 [2024-11-20 12:52:38.804493] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:13.404 [2024-11-20 12:52:38.804509] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:13.404 [2024-11-20 12:52:38.804618] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:13.404 [2024-11-20 12:52:38.804630] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:13.404 [2024-11-20 12:52:38.804645] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:13.404 [2024-11-20 12:52:38.804658] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:13.404 [2024-11-20 12:52:38.804669] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:13.404 [2024-11-20 12:52:38.804677] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:13.404 [2024-11-20 12:52:38.804687] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:13.404 [2024-11-20 12:52:38.804695] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:13.404 [2024-11-20 12:52:38.804708] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:13.404 [2024-11-20 12:52:38.804716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.404 [2024-11-20 12:52:38.804725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:13.404 [2024-11-20 12:52:38.804734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:20:13.404 [2024-11-20 12:52:38.804769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.404 [2024-11-20 12:52:38.804859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.404 [2024-11-20 12:52:38.804870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:13.404 [2024-11-20 12:52:38.804878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:13.404 [2024-11-20 12:52:38.804888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.404 [2024-11-20 12:52:38.804990] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:13.404 [2024-11-20 12:52:38.805002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:13.404 [2024-11-20 12:52:38.805011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:13.404 [2024-11-20 12:52:38.805021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.404 [2024-11-20 12:52:38.805029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:13.404 [2024-11-20 12:52:38.805038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:13.404 [2024-11-20 12:52:38.805045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:13.404 [2024-11-20 12:52:38.805058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:13.404 [2024-11-20 12:52:38.805070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:13.404 [2024-11-20 12:52:38.805079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:13.404 [2024-11-20 12:52:38.805086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:13.404 [2024-11-20 12:52:38.805094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:13.404 [2024-11-20 12:52:38.805100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:13.404 [2024-11-20 12:52:38.805109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:13.404 [2024-11-20 12:52:38.805116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:13.404 [2024-11-20 12:52:38.805126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.404 
[2024-11-20 12:52:38.805135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:13.404 [2024-11-20 12:52:38.805143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:13.404 [2024-11-20 12:52:38.805150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.404 [2024-11-20 12:52:38.805161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:13.404 [2024-11-20 12:52:38.805174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:13.404 [2024-11-20 12:52:38.805182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:13.404 [2024-11-20 12:52:38.805188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:13.404 [2024-11-20 12:52:38.805199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:13.404 [2024-11-20 12:52:38.805205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:13.404 [2024-11-20 12:52:38.805214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:13.404 [2024-11-20 12:52:38.805221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:13.404 [2024-11-20 12:52:38.805229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:13.404 [2024-11-20 12:52:38.805236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:13.404 [2024-11-20 12:52:38.805244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:13.404 [2024-11-20 12:52:38.805250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:13.404 [2024-11-20 12:52:38.805259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:13.404 [2024-11-20 12:52:38.805266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:13.404 [2024-11-20 12:52:38.805275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:13.404 [2024-11-20 12:52:38.805282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:13.404 [2024-11-20 12:52:38.805291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:13.404 [2024-11-20 12:52:38.805297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:13.404 [2024-11-20 12:52:38.805305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:13.405 [2024-11-20 12:52:38.805312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:13.405 [2024-11-20 12:52:38.805322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.405 [2024-11-20 12:52:38.805329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:13.405 [2024-11-20 12:52:38.805337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:13.405 [2024-11-20 12:52:38.805344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.405 [2024-11-20 12:52:38.805352] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:13.405 [2024-11-20 12:52:38.805360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:13.405 [2024-11-20 12:52:38.805372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:13.405 [2024-11-20 12:52:38.805379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.405 [2024-11-20 12:52:38.805389] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:13.405 [2024-11-20 12:52:38.805397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:13.405 [2024-11-20 12:52:38.805406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:13.405 [2024-11-20 12:52:38.805414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:13.405 [2024-11-20 12:52:38.805422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:13.405 [2024-11-20 12:52:38.805429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:13.405 [2024-11-20 12:52:38.805438] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:13.405 [2024-11-20 12:52:38.805448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:13.405 [2024-11-20 12:52:38.805461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:13.405 [2024-11-20 12:52:38.805468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:13.405 [2024-11-20 12:52:38.805478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:13.405 [2024-11-20 12:52:38.805486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:13.405 [2024-11-20 12:52:38.805495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:13.405 [2024-11-20 12:52:38.805502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:13.405 [2024-11-20 12:52:38.805512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:13.405 [2024-11-20 12:52:38.805519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:13.405 [2024-11-20 12:52:38.805527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:13.405 [2024-11-20 12:52:38.805534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:13.405 [2024-11-20 12:52:38.805543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:13.405 [2024-11-20 12:52:38.805550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:13.405 [2024-11-20 12:52:38.805560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:13.405 [2024-11-20 12:52:38.805567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:13.405 [2024-11-20 12:52:38.805576] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:13.405 [2024-11-20 
12:52:38.805584] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:13.405 [2024-11-20 12:52:38.805596] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:13.405 [2024-11-20 12:52:38.805604] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:13.405 [2024-11-20 12:52:38.805613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:13.405 [2024-11-20 12:52:38.805621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:13.405 [2024-11-20 12:52:38.805630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.405 [2024-11-20 12:52:38.805638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:13.405 [2024-11-20 12:52:38.805647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:20:13.405 [2024-11-20 12:52:38.805655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.405 [2024-11-20 12:52:38.837799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.405 [2024-11-20 12:52:38.837843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:13.405 [2024-11-20 12:52:38.837857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.080 ms 00:20:13.405 [2024-11-20 12:52:38.837865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.405 [2024-11-20 12:52:38.838003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.405 [2024-11-20 12:52:38.838013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:13.405 [2024-11-20 12:52:38.838024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:13.405 [2024-11-20 12:52:38.838033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.405 [2024-11-20 12:52:38.873618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.405 [2024-11-20 12:52:38.873662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:13.405 [2024-11-20 12:52:38.873680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.559 ms 00:20:13.405 [2024-11-20 12:52:38.873688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.405 [2024-11-20 12:52:38.873809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.405 [2024-11-20 12:52:38.873821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:13.405 [2024-11-20 12:52:38.873833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:13.405 [2024-11-20 12:52:38.873842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.405 [2024-11-20 12:52:38.874398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.405 [2024-11-20 12:52:38.874425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:13.405 [2024-11-20 12:52:38.874441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:20:13.405 [2024-11-20 12:52:38.874449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:13.405 [2024-11-20 12:52:38.874605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.405 [2024-11-20 12:52:38.874614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:13.405 [2024-11-20 12:52:38.874624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:20:13.405 [2024-11-20 12:52:38.874632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.405 [2024-11-20 12:52:38.892800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.405 [2024-11-20 12:52:38.892839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:13.405 [2024-11-20 12:52:38.892852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.142 ms 00:20:13.405 [2024-11-20 12:52:38.892861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.405 [2024-11-20 12:52:38.907202] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:13.405 [2024-11-20 12:52:38.907251] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:13.405 [2024-11-20 12:52:38.907268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.405 [2024-11-20 12:52:38.907276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:13.405 [2024-11-20 12:52:38.907288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.288 ms 00:20:13.405 [2024-11-20 12:52:38.907296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.667 [2024-11-20 12:52:38.933315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.667 [2024-11-20 12:52:38.933364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:13.667 [2024-11-20 12:52:38.933379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.919 ms 00:20:13.667 [2024-11-20 12:52:38.933387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.667 [2024-11-20 12:52:38.946319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.667 [2024-11-20 12:52:38.946364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:13.667 [2024-11-20 12:52:38.946383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.829 ms 00:20:13.667 [2024-11-20 12:52:38.946390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.667 [2024-11-20 12:52:38.958936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.667 [2024-11-20 12:52:38.958979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:13.667 [2024-11-20 12:52:38.958995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.452 ms 00:20:13.667 [2024-11-20 12:52:38.959002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.667 [2024-11-20 12:52:38.959679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.667 [2024-11-20 12:52:38.959705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:13.667 [2024-11-20 12:52:38.959718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:20:13.667 [2024-11-20 12:52:38.959726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.667 [2024-11-20 
12:52:39.036077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.667 [2024-11-20 12:52:39.036147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:13.667 [2024-11-20 12:52:39.036169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.300 ms 00:20:13.667 [2024-11-20 12:52:39.036179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.667 [2024-11-20 12:52:39.047221] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:13.667 [2024-11-20 12:52:39.066179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.667 [2024-11-20 12:52:39.066432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:13.667 [2024-11-20 12:52:39.066458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.889 ms 00:20:13.667 [2024-11-20 12:52:39.066469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.667 [2024-11-20 12:52:39.066565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.667 [2024-11-20 12:52:39.066579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:13.667 [2024-11-20 12:52:39.066589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:13.667 [2024-11-20 12:52:39.066599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.667 [2024-11-20 12:52:39.066657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.667 [2024-11-20 12:52:39.066668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:13.667 [2024-11-20 12:52:39.066677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:13.667 [2024-11-20 12:52:39.066688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.667 [2024-11-20 12:52:39.066716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.667 [2024-11-20 12:52:39.066728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:13.667 [2024-11-20 12:52:39.066773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:13.667 [2024-11-20 12:52:39.066787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.667 [2024-11-20 12:52:39.066824] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:13.667 [2024-11-20 12:52:39.066839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.667 [2024-11-20 12:52:39.066847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:13.667 [2024-11-20 12:52:39.066861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:13.667 [2024-11-20 12:52:39.066869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.667 [2024-11-20 12:52:39.093258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.667 [2024-11-20 12:52:39.093441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:13.667 [2024-11-20 12:52:39.093470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.358 ms 00:20:13.667 [2024-11-20 12:52:39.093478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.667 [2024-11-20 12:52:39.093624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.667 [2024-11-20 12:52:39.093637] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:20:13.667 [2024-11-20 12:52:39.093649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms
00:20:13.667 [2024-11-20 12:52:39.093659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:13.667 [2024-11-20 12:52:39.094784] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:13.667 [2024-11-20 12:52:39.098381] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 324.109 ms, result 0
00:20:13.667 [2024-11-20 12:52:39.099687] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:13.667 Some configs were skipped because the RPC state that can call them passed over.
00:20:13.667 12:52:39 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:20:13.927 [2024-11-20 12:52:39.346919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:13.927 [2024-11-20 12:52:39.347130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:13.927 [2024-11-20 12:52:39.347155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.707 ms
00:20:13.927 [2024-11-20 12:52:39.347167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:13.927 [2024-11-20 12:52:39.347214] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.009 ms, result 0
00:20:13.927 true
00:20:13.927 12:52:39 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:20:14.187 [2024-11-20 12:52:39.566441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:14.187 [2024-11-20 12:52:39.566643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:14.187 [2024-11-20 12:52:39.566672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.172 ms
00:20:14.187 [2024-11-20 12:52:39.566681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:14.187 [2024-11-20 12:52:39.566729] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.465 ms, result 0
00:20:14.187 true
00:20:14.187 12:52:39 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76720
00:20:14.187 12:52:39 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76720 ']'
00:20:14.187 12:52:39 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76720
00:20:14.187 12:52:39 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:20:14.187 12:52:39 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:14.187 12:52:39 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76720
00:20:14.187 killing process with pid 76720
00:20:14.187 12:52:39 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:14.187 12:52:39 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:14.187 12:52:39 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76720'
00:20:14.187 12:52:39 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76720
00:20:14.187 12:52:39 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76720
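The two rpc.py calls traced above (each ending in an 'FTL trim' management process with result 0) exercise bdev_ftl_unmap at the two ends of the device's logical space: --lba 0 covers blocks 0 through 1023, and --lba 23591936 covers the last 1024 blocks, since the layout dump further down reports 23592960 L2P entries and 23592960 - 1024 = 23591936. A minimal sketch for replaying the same pair of trims by hand, assuming an ftl0 bdev has already been brought up (same workspace paths as this run; the saved config, e.g. test/ftl/config/ftl.json, can be used to recreate it):

  # trim the first 1024 logical blocks of ftl0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  # trim the last 1024 logical blocks (23592960 total - 1024 = 23591936)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

rpc.py prints true for each call once the corresponding 'FTL trim' process finishes, exactly as in the trace above.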
00:20:15.132 [2024-11-20 12:52:40.284030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:15.132 [2024-11-20 12:52:40.284079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:20:15.132 [2024-11-20 12:52:40.284090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:20:15.132 [2024-11-20 12:52:40.284098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.132 [2024-11-20 12:52:40.284115] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:20:15.132 [2024-11-20 12:52:40.286186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:15.132 [2024-11-20 12:52:40.286212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:20:15.132 [2024-11-20 12:52:40.286223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.058 ms
00:20:15.132 [2024-11-20 12:52:40.286230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.132 [2024-11-20 12:52:40.286452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:15.132 [2024-11-20 12:52:40.286459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:20:15.132 [2024-11-20 12:52:40.286467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms
00:20:15.132 [2024-11-20 12:52:40.286472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.132 [2024-11-20 12:52:40.289621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:15.132 [2024-11-20 12:52:40.289645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:20:15.132 [2024-11-20 12:52:40.289656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.132 ms
00:20:15.132 [2024-11-20 12:52:40.289662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.132 [2024-11-20 12:52:40.294941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:15.132 [2024-11-20 12:52:40.295057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:20:15.132 [2024-11-20 12:52:40.295073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.250 ms
00:20:15.132 [2024-11-20 12:52:40.295079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.132 [2024-11-20 12:52:40.302431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:15.132 [2024-11-20 12:52:40.302542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:20:15.132 [2024-11-20 12:52:40.302558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.292 ms
00:20:15.132 [2024-11-20 12:52:40.302569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.132 [2024-11-20 12:52:40.308727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:15.132 [2024-11-20 12:52:40.308760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:20:15.132 [2024-11-20 12:52:40.308771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.128 ms
00:20:15.132 [2024-11-20 12:52:40.308778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.132 [2024-11-20 12:52:40.308885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:15.132 [2024-11-20 12:52:40.308892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:20:15.132 [2024-11-20 12:52:40.308900] mngt/ftl_mngt.c:
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:15.132 [2024-11-20 12:52:40.308906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.132 [2024-11-20 12:52:40.316678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.132 [2024-11-20 12:52:40.316790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:15.132 [2024-11-20 12:52:40.316805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.756 ms 00:20:15.132 [2024-11-20 12:52:40.316810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.132 [2024-11-20 12:52:40.324203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.132 [2024-11-20 12:52:40.324298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:15.132 [2024-11-20 12:52:40.324313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.354 ms 00:20:15.132 [2024-11-20 12:52:40.324319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.132 [2024-11-20 12:52:40.331292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.132 [2024-11-20 12:52:40.331380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:15.132 [2024-11-20 12:52:40.331396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.945 ms 00:20:15.132 [2024-11-20 12:52:40.331401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.132 [2024-11-20 12:52:40.338296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.132 [2024-11-20 12:52:40.338387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:15.132 [2024-11-20 12:52:40.338402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.845 ms 00:20:15.132 [2024-11-20 12:52:40.338408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.132 [2024-11-20 12:52:40.338435] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:15.132 [2024-11-20 12:52:40.338446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338517] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:15.132 [2024-11-20 12:52:40.338662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 
[2024-11-20 12:52:40.338683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:15.133 [2024-11-20 12:52:40.338860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.338998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.339004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.339011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.339016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.339025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.339030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.339037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.339043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.339050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.339055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.339062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.339067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.339074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.339080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.339087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.339092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.339100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.339106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.339119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:15.133 [2024-11-20 12:52:40.339131] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:15.133 [2024-11-20 12:52:40.339141] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 200ffff6-6870-4bdc-85eb-29aedea7a1b0 00:20:15.133 [2024-11-20 12:52:40.339152] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:15.133 [2024-11-20 12:52:40.339161] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:15.133 [2024-11-20 12:52:40.339166] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:15.133 [2024-11-20 12:52:40.339174] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:15.133 [2024-11-20 12:52:40.339179] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:15.133 [2024-11-20 12:52:40.339186] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:15.133 [2024-11-20 12:52:40.339192] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:15.133 [2024-11-20 12:52:40.339199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:15.133 [2024-11-20 12:52:40.339203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:15.133 [2024-11-20 12:52:40.339210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
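In the ftl_debug.c statistics block above, the value 'WAF: inf' follows directly from the two counters printed beside it: write amplification factor = total writes / user writes = 960 / 0, which has no finite value because this run issued no user writes before shutdown, so all 960 blocks written were FTL-internal metadata traffic. With user I/O present the same dump yields a finite ratio; for illustration, 960 total writes against 480 user writes would be a WAF of 2.0. On a live device these counters can also be queried over RPC, e.g. scripts/rpc.py bdev_ftl_get_stats -b ftl0, where that RPC is available in the SPDK build in use.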
00:20:15.133 [2024-11-20 12:52:40.339216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:15.133 [2024-11-20 12:52:40.339223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:20:15.133 [2024-11-20 12:52:40.339229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.133 [2024-11-20 12:52:40.348701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.133 [2024-11-20 12:52:40.348724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:15.133 [2024-11-20 12:52:40.348735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.444 ms 00:20:15.133 [2024-11-20 12:52:40.348762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.133 [2024-11-20 12:52:40.349046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.133 [2024-11-20 12:52:40.349118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:15.133 [2024-11-20 12:52:40.349129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:20:15.133 [2024-11-20 12:52:40.349137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.133 [2024-11-20 12:52:40.383559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.133 [2024-11-20 12:52:40.383586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:15.134 [2024-11-20 12:52:40.383596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.134 [2024-11-20 12:52:40.383604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.134 [2024-11-20 12:52:40.384563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.134 [2024-11-20 12:52:40.384587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:15.134 [2024-11-20 12:52:40.384596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.134 [2024-11-20 12:52:40.384604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.134 [2024-11-20 12:52:40.384642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.134 [2024-11-20 12:52:40.384649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:15.134 [2024-11-20 12:52:40.384658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.134 [2024-11-20 12:52:40.384664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.134 [2024-11-20 12:52:40.384678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.134 [2024-11-20 12:52:40.384684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:15.134 [2024-11-20 12:52:40.384692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.134 [2024-11-20 12:52:40.384697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.134 [2024-11-20 12:52:40.443477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.134 [2024-11-20 12:52:40.443597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:15.134 [2024-11-20 12:52:40.443614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.134 [2024-11-20 12:52:40.443621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.134 [2024-11-20 
12:52:40.491155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.134 [2024-11-20 12:52:40.491185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:15.134 [2024-11-20 12:52:40.491195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.134 [2024-11-20 12:52:40.491203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.134 [2024-11-20 12:52:40.491262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.134 [2024-11-20 12:52:40.491269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:15.134 [2024-11-20 12:52:40.491281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.134 [2024-11-20 12:52:40.491286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.134 [2024-11-20 12:52:40.491310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.134 [2024-11-20 12:52:40.491316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:15.134 [2024-11-20 12:52:40.491323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.134 [2024-11-20 12:52:40.491329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.134 [2024-11-20 12:52:40.491398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.134 [2024-11-20 12:52:40.491405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:15.134 [2024-11-20 12:52:40.491413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.134 [2024-11-20 12:52:40.491419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.134 [2024-11-20 12:52:40.491442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.134 [2024-11-20 12:52:40.491449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:15.134 [2024-11-20 12:52:40.491457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.134 [2024-11-20 12:52:40.491463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.134 [2024-11-20 12:52:40.491493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.134 [2024-11-20 12:52:40.491501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:15.134 [2024-11-20 12:52:40.491509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.134 [2024-11-20 12:52:40.491515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.134 [2024-11-20 12:52:40.491558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.134 [2024-11-20 12:52:40.491566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:15.134 [2024-11-20 12:52:40.491573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.134 [2024-11-20 12:52:40.491579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.134 [2024-11-20 12:52:40.491682] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 207.633 ms, result 0 00:20:15.705 12:52:40 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:15.705 12:52:40 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:15.705 [2024-11-20 12:52:41.057135] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:20:15.705 [2024-11-20 12:52:41.057258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76773 ] 00:20:15.705 [2024-11-20 12:52:41.213041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.965 [2024-11-20 12:52:41.290869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.226 [2024-11-20 12:52:41.497063] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:16.226 [2024-11-20 12:52:41.497110] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:16.226 [2024-11-20 12:52:41.644943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.226 [2024-11-20 12:52:41.644980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:16.226 [2024-11-20 12:52:41.644990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:16.226 [2024-11-20 12:52:41.644996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.226 [2024-11-20 12:52:41.647035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.226 [2024-11-20 12:52:41.647064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:16.226 [2024-11-20 12:52:41.647072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.026 ms 00:20:16.226 [2024-11-20 12:52:41.647078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.226 [2024-11-20 12:52:41.647132] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:16.226 [2024-11-20 12:52:41.647651] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:16.226 [2024-11-20 12:52:41.647668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.226 [2024-11-20 12:52:41.647673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:16.226 [2024-11-20 12:52:41.647680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:20:16.226 [2024-11-20 12:52:41.647686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.226 [2024-11-20 12:52:41.648757] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:16.226 [2024-11-20 12:52:41.658223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.226 [2024-11-20 12:52:41.658353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:16.226 [2024-11-20 12:52:41.658367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.467 ms 00:20:16.226 [2024-11-20 12:52:41.658374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.226 [2024-11-20 12:52:41.658435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.226 [2024-11-20 12:52:41.658444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:16.226 [2024-11-20 12:52:41.658450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.014 ms 00:20:16.226 [2024-11-20 12:52:41.658455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.226 [2024-11-20 12:52:41.662910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.226 [2024-11-20 12:52:41.662994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:16.226 [2024-11-20 12:52:41.663040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.426 ms 00:20:16.226 [2024-11-20 12:52:41.663057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.226 [2024-11-20 12:52:41.663142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.226 [2024-11-20 12:52:41.663208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:16.226 [2024-11-20 12:52:41.663268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:16.226 [2024-11-20 12:52:41.663282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.226 [2024-11-20 12:52:41.663310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.226 [2024-11-20 12:52:41.663339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:16.226 [2024-11-20 12:52:41.663354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:16.226 [2024-11-20 12:52:41.663368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.226 [2024-11-20 12:52:41.663393] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:16.226 [2024-11-20 12:52:41.666144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.226 [2024-11-20 12:52:41.666227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:16.226 [2024-11-20 12:52:41.666280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.755 ms 00:20:16.226 [2024-11-20 12:52:41.666299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.226 [2024-11-20 12:52:41.666338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.226 [2024-11-20 12:52:41.666385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:16.226 [2024-11-20 12:52:41.666402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:16.226 [2024-11-20 12:52:41.666416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.226 [2024-11-20 12:52:41.666457] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:16.226 [2024-11-20 12:52:41.666487] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:16.226 [2024-11-20 12:52:41.666530] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:16.226 [2024-11-20 12:52:41.666640] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:16.226 [2024-11-20 12:52:41.666746] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:16.226 [2024-11-20 12:52:41.666852] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:16.226 [2024-11-20 12:52:41.666876] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:16.226 [2024-11-20 12:52:41.666899] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:16.226 [2024-11-20 12:52:41.666925] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:16.226 [2024-11-20 12:52:41.666946] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:16.226 [2024-11-20 12:52:41.666960] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:16.226 [2024-11-20 12:52:41.667009] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:16.226 [2024-11-20 12:52:41.667026] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:16.226 [2024-11-20 12:52:41.667041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.226 [2024-11-20 12:52:41.667055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:16.226 [2024-11-20 12:52:41.667069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:20:16.226 [2024-11-20 12:52:41.667083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.226 [2024-11-20 12:52:41.667171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.226 [2024-11-20 12:52:41.667197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:16.226 [2024-11-20 12:52:41.667214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:16.226 [2024-11-20 12:52:41.667229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.226 [2024-11-20 12:52:41.667315] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:16.226 [2024-11-20 12:52:41.667333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:16.226 [2024-11-20 12:52:41.667384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:16.226 [2024-11-20 12:52:41.667399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.226 [2024-11-20 12:52:41.667413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:16.226 [2024-11-20 12:52:41.667427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:16.226 [2024-11-20 12:52:41.667475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:16.226 [2024-11-20 12:52:41.667493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:16.226 [2024-11-20 12:52:41.667507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:16.226 [2024-11-20 12:52:41.667520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:16.226 [2024-11-20 12:52:41.667542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:16.226 [2024-11-20 12:52:41.667556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:16.226 [2024-11-20 12:52:41.667600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:16.226 [2024-11-20 12:52:41.667623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:16.226 [2024-11-20 12:52:41.667637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:16.226 [2024-11-20 12:52:41.667650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.226 [2024-11-20 12:52:41.667664] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:16.226 [2024-11-20 12:52:41.667677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:16.226 [2024-11-20 12:52:41.667690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.226 [2024-11-20 12:52:41.667730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:16.226 [2024-11-20 12:52:41.667762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:16.226 [2024-11-20 12:52:41.667776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.226 [2024-11-20 12:52:41.667790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:16.226 [2024-11-20 12:52:41.667803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:16.226 [2024-11-20 12:52:41.667817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.226 [2024-11-20 12:52:41.667830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:16.226 [2024-11-20 12:52:41.667844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:16.226 [2024-11-20 12:52:41.667883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.226 [2024-11-20 12:52:41.667900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:16.226 [2024-11-20 12:52:41.667915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:16.226 [2024-11-20 12:52:41.667929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:16.226 [2024-11-20 12:52:41.667943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:16.227 [2024-11-20 12:52:41.667956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:16.227 [2024-11-20 12:52:41.667970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:16.227 [2024-11-20 12:52:41.667983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:16.227 [2024-11-20 12:52:41.668019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:16.227 [2024-11-20 12:52:41.668054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:16.227 [2024-11-20 12:52:41.668070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:16.227 [2024-11-20 12:52:41.668114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:16.227 [2024-11-20 12:52:41.668130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.227 [2024-11-20 12:52:41.668144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:16.227 [2024-11-20 12:52:41.668158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:16.227 [2024-11-20 12:52:41.668172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.227 [2024-11-20 12:52:41.668210] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:16.227 [2024-11-20 12:52:41.668227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:16.227 [2024-11-20 12:52:41.668241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:16.227 [2024-11-20 12:52:41.668259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:16.227 [2024-11-20 12:52:41.668273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:16.227 
[2024-11-20 12:52:41.668287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:16.227 [2024-11-20 12:52:41.668324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:16.227 [2024-11-20 12:52:41.668341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:16.227 [2024-11-20 12:52:41.668355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:16.227 [2024-11-20 12:52:41.668395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:16.227 [2024-11-20 12:52:41.668413] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:16.227 [2024-11-20 12:52:41.668436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:16.227 [2024-11-20 12:52:41.668477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:16.227 [2024-11-20 12:52:41.668500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:16.227 [2024-11-20 12:52:41.668522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:16.227 [2024-11-20 12:52:41.668570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:16.227 [2024-11-20 12:52:41.668594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:16.227 [2024-11-20 12:52:41.668616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:16.227 [2024-11-20 12:52:41.668638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:16.227 [2024-11-20 12:52:41.668698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:16.227 [2024-11-20 12:52:41.668745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:16.227 [2024-11-20 12:52:41.668768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:16.227 [2024-11-20 12:52:41.668790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:16.227 [2024-11-20 12:52:41.668812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:16.227 [2024-11-20 12:52:41.668833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:16.227 [2024-11-20 12:52:41.668947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:16.227 [2024-11-20 12:52:41.668975] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:16.227 [2024-11-20 12:52:41.669003] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:16.227 [2024-11-20 12:52:41.669034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:16.227 [2024-11-20 12:52:41.669060] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:16.227 [2024-11-20 12:52:41.669085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:16.227 [2024-11-20 12:52:41.669140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:16.227 [2024-11-20 12:52:41.669166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.227 [2024-11-20 12:52:41.669186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:16.227 [2024-11-20 12:52:41.669207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.902 ms 00:20:16.227 [2024-11-20 12:52:41.669224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.227 [2024-11-20 12:52:41.689989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.227 [2024-11-20 12:52:41.690084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:16.227 [2024-11-20 12:52:41.690123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.697 ms 00:20:16.227 [2024-11-20 12:52:41.690139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.227 [2024-11-20 12:52:41.690240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.227 [2024-11-20 12:52:41.690293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:16.227 [2024-11-20 12:52:41.690312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:16.227 [2024-11-20 12:52:41.690326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.227 [2024-11-20 12:52:41.730892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.227 [2024-11-20 12:52:41.731001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:16.227 [2024-11-20 12:52:41.731045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.520 ms 00:20:16.227 [2024-11-20 12:52:41.731067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.227 [2024-11-20 12:52:41.731132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.227 [2024-11-20 12:52:41.731231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:16.227 [2024-11-20 12:52:41.731258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:16.227 [2024-11-20 12:52:41.731273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.227 [2024-11-20 12:52:41.731607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.227 [2024-11-20 12:52:41.731683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:16.227 [2024-11-20 12:52:41.731721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:20:16.227 [2024-11-20 12:52:41.731747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.227 [2024-11-20 
12:52:41.731867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.227 [2024-11-20 12:52:41.731924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:16.227 [2024-11-20 12:52:41.731962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:20:16.227 [2024-11-20 12:52:41.731976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.488 [2024-11-20 12:52:41.742680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.488 [2024-11-20 12:52:41.742780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:16.488 [2024-11-20 12:52:41.742819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.680 ms 00:20:16.488 [2024-11-20 12:52:41.742836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.488 [2024-11-20 12:52:41.752584] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:16.488 [2024-11-20 12:52:41.752686] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:16.488 [2024-11-20 12:52:41.752735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.488 [2024-11-20 12:52:41.752844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:16.488 [2024-11-20 12:52:41.752870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.803 ms 00:20:16.488 [2024-11-20 12:52:41.752884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.488 [2024-11-20 12:52:41.771398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.488 [2024-11-20 12:52:41.771512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:16.488 [2024-11-20 12:52:41.771533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.264 ms 00:20:16.488 [2024-11-20 12:52:41.771540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.488 [2024-11-20 12:52:41.780695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.488 [2024-11-20 12:52:41.780790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:16.488 [2024-11-20 12:52:41.780832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.101 ms 00:20:16.488 [2024-11-20 12:52:41.780849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.488 [2024-11-20 12:52:41.789479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.488 [2024-11-20 12:52:41.789565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:16.488 [2024-11-20 12:52:41.789609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.582 ms 00:20:16.488 [2024-11-20 12:52:41.789626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.488 [2024-11-20 12:52:41.790096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.488 [2024-11-20 12:52:41.790169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:16.488 [2024-11-20 12:52:41.790208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:20:16.488 [2024-11-20 12:52:41.790224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.488 [2024-11-20 12:52:41.833371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:16.488 [2024-11-20 12:52:41.833510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:16.488 [2024-11-20 12:52:41.833552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.118 ms 00:20:16.488 [2024-11-20 12:52:41.833570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.488 [2024-11-20 12:52:41.841397] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:16.488 [2024-11-20 12:52:41.853015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.488 [2024-11-20 12:52:41.853118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:16.488 [2024-11-20 12:52:41.853156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.334 ms 00:20:16.488 [2024-11-20 12:52:41.853173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.488 [2024-11-20 12:52:41.853257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.488 [2024-11-20 12:52:41.853277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:16.488 [2024-11-20 12:52:41.853293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:16.488 [2024-11-20 12:52:41.853307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.488 [2024-11-20 12:52:41.853354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.488 [2024-11-20 12:52:41.853371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:16.488 [2024-11-20 12:52:41.853386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:16.488 [2024-11-20 12:52:41.853448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.488 [2024-11-20 12:52:41.853483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.488 [2024-11-20 12:52:41.853503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:16.488 [2024-11-20 12:52:41.853517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:16.488 [2024-11-20 12:52:41.853531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.488 [2024-11-20 12:52:41.853562] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:16.488 [2024-11-20 12:52:41.853580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.488 [2024-11-20 12:52:41.853631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:16.488 [2024-11-20 12:52:41.853649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:16.488 [2024-11-20 12:52:41.853663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.488 [2024-11-20 12:52:41.871418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.488 [2024-11-20 12:52:41.871509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:16.488 [2024-11-20 12:52:41.871555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.728 ms 00:20:16.488 [2024-11-20 12:52:41.871572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.488 [2024-11-20 12:52:41.871646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.488 [2024-11-20 12:52:41.871695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:16.488 [2024-11-20 12:52:41.871752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:16.488 [2024-11-20 12:52:41.871769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.488 [2024-11-20 12:52:41.872438] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:16.489 [2024-11-20 12:52:41.874771] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 227.277 ms, result 0 00:20:16.489 [2024-11-20 12:52:41.875494] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:16.489 [2024-11-20 12:52:41.890464] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:17.432  [2024-11-20T12:52:44.333Z] Copying: 19/256 [MB] (19 MBps) [2024-11-20T12:52:44.905Z] Copying: 35/256 [MB] (16 MBps) [2024-11-20T12:52:46.290Z] Copying: 46/256 [MB] (10 MBps) [2024-11-20T12:52:47.231Z] Copying: 64/256 [MB] (18 MBps) [2024-11-20T12:52:48.241Z] Copying: 84/256 [MB] (19 MBps) [2024-11-20T12:52:49.211Z] Copying: 103/256 [MB] (19 MBps) [2024-11-20T12:52:50.154Z] Copying: 121/256 [MB] (17 MBps) [2024-11-20T12:52:51.100Z] Copying: 141/256 [MB] (19 MBps) [2024-11-20T12:52:52.041Z] Copying: 161/256 [MB] (20 MBps) [2024-11-20T12:52:52.985Z] Copying: 183/256 [MB] (21 MBps) [2024-11-20T12:52:53.929Z] Copying: 202/256 [MB] (19 MBps) [2024-11-20T12:52:54.875Z] Copying: 235/256 [MB] (32 MBps) [2024-11-20T12:52:54.875Z] Copying: 256/256 [MB] (average 19 MBps)[2024-11-20 12:52:54.751375] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:29.356 [2024-11-20 12:52:54.761666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.356 [2024-11-20 12:52:54.761729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:29.356 [2024-11-20 12:52:54.761767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:29.356 [2024-11-20 12:52:54.761783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.356 [2024-11-20 12:52:54.761809] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:29.356 [2024-11-20 12:52:54.764816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.356 [2024-11-20 12:52:54.764858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:29.356 [2024-11-20 12:52:54.764869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.992 ms 00:20:29.356 [2024-11-20 12:52:54.764878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.356 [2024-11-20 12:52:54.765145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.357 [2024-11-20 12:52:54.765157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:29.357 [2024-11-20 12:52:54.765167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:20:29.357 [2024-11-20 12:52:54.765175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.357 [2024-11-20 12:52:54.768881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.357 [2024-11-20 12:52:54.768911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:29.357 [2024-11-20 12:52:54.768921] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.690 ms 00:20:29.357 [2024-11-20 12:52:54.768928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.357 [2024-11-20 12:52:54.775838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.357 [2024-11-20 12:52:54.776039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:29.357 [2024-11-20 12:52:54.776060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.891 ms 00:20:29.357 [2024-11-20 12:52:54.776069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.357 [2024-11-20 12:52:54.801414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.357 [2024-11-20 12:52:54.801461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:29.357 [2024-11-20 12:52:54.801474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.276 ms 00:20:29.357 [2024-11-20 12:52:54.801482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.357 [2024-11-20 12:52:54.817216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.357 [2024-11-20 12:52:54.817266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:29.357 [2024-11-20 12:52:54.817279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.682 ms 00:20:29.357 [2024-11-20 12:52:54.817291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.357 [2024-11-20 12:52:54.817442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.357 [2024-11-20 12:52:54.817454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:29.357 [2024-11-20 12:52:54.817463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:20:29.357 [2024-11-20 12:52:54.817471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.357 [2024-11-20 12:52:54.843359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.357 [2024-11-20 12:52:54.843563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:29.357 [2024-11-20 12:52:54.843585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.862 ms 00:20:29.357 [2024-11-20 12:52:54.843593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.357 [2024-11-20 12:52:54.869107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.357 [2024-11-20 12:52:54.869153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:29.357 [2024-11-20 12:52:54.869164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.431 ms 00:20:29.357 [2024-11-20 12:52:54.869171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.621 [2024-11-20 12:52:54.894114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.621 [2024-11-20 12:52:54.894159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:29.621 [2024-11-20 12:52:54.894171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.863 ms 00:20:29.621 [2024-11-20 12:52:54.894178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.621 [2024-11-20 12:52:54.918784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.621 [2024-11-20 12:52:54.918819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL clean state 00:20:29.621 [2024-11-20 12:52:54.918831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.504 ms 00:20:29.621 [2024-11-20 12:52:54.918837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.621 [2024-11-20 12:52:54.918885] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:29.621 [2024-11-20 12:52:54.918903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.918913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.918920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.918928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.918936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.918943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.918951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.918958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.918965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.918973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.918980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.918988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.918995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.919002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.919009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.919016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.919023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.919031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.919039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.919046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.919053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.919060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 
12:52:54.919068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.919075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.919081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.919089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.919098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.919105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.919112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:29.621 [2024-11-20 12:52:54.919119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:20:29.622 [2024-11-20 12:52:54.919254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:29.622 [2024-11-20 12:52:54.919688] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:29.622 [2024-11-20 12:52:54.919697] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 200ffff6-6870-4bdc-85eb-29aedea7a1b0 00:20:29.622 [2024-11-20 12:52:54.919706] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:29.622 [2024-11-20 12:52:54.919713] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:29.622 [2024-11-20 12:52:54.919721] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:29.622 [2024-11-20 12:52:54.919729] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:29.623 [2024-11-20 12:52:54.919768] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:29.623 [2024-11-20 12:52:54.919778] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:29.623 [2024-11-20 12:52:54.919786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:29.623 [2024-11-20 12:52:54.919792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:29.623 [2024-11-20 12:52:54.919799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:29.623 [2024-11-20 12:52:54.919806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.623 [2024-11-20 12:52:54.919817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:29.623 [2024-11-20 12:52:54.919827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.922 ms 00:20:29.623 [2024-11-20 12:52:54.919834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.623 [2024-11-20 12:52:54.933151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.623 [2024-11-20 12:52:54.933337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:29.623 [2024-11-20 12:52:54.933356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.282 ms 00:20:29.623 [2024-11-20 12:52:54.933363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.623 [2024-11-20 12:52:54.933801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.623 [2024-11-20 12:52:54.933817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:29.623 [2024-11-20 12:52:54.933827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:20:29.623 [2024-11-20 12:52:54.933835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.623 [2024-11-20 12:52:54.972847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.623 [2024-11-20 12:52:54.972896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:29.623 [2024-11-20 12:52:54.972907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.623 [2024-11-20 12:52:54.972915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.623 
[2024-11-20 12:52:54.973018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.623 [2024-11-20 12:52:54.973028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:29.623 [2024-11-20 12:52:54.973037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.623 [2024-11-20 12:52:54.973045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.623 [2024-11-20 12:52:54.973096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.623 [2024-11-20 12:52:54.973106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:29.623 [2024-11-20 12:52:54.973114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.623 [2024-11-20 12:52:54.973121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.623 [2024-11-20 12:52:54.973138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.623 [2024-11-20 12:52:54.973149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:29.623 [2024-11-20 12:52:54.973157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.623 [2024-11-20 12:52:54.973164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.623 [2024-11-20 12:52:55.059731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.623 [2024-11-20 12:52:55.059971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:29.623 [2024-11-20 12:52:55.059996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.623 [2024-11-20 12:52:55.060006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.623 [2024-11-20 12:52:55.129596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.623 [2024-11-20 12:52:55.129660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:29.623 [2024-11-20 12:52:55.129675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.623 [2024-11-20 12:52:55.129684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.623 [2024-11-20 12:52:55.129799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.623 [2024-11-20 12:52:55.129811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:29.623 [2024-11-20 12:52:55.129821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.623 [2024-11-20 12:52:55.129830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.623 [2024-11-20 12:52:55.129864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.623 [2024-11-20 12:52:55.129874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:29.623 [2024-11-20 12:52:55.129887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.623 [2024-11-20 12:52:55.129896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.623 [2024-11-20 12:52:55.130001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.623 [2024-11-20 12:52:55.130011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:29.623 [2024-11-20 12:52:55.130020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.623 [2024-11-20 12:52:55.130028] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.623 [2024-11-20 12:52:55.130062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.623 [2024-11-20 12:52:55.130073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:29.623 [2024-11-20 12:52:55.130080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.623 [2024-11-20 12:52:55.130092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.623 [2024-11-20 12:52:55.130137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.623 [2024-11-20 12:52:55.130147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:29.623 [2024-11-20 12:52:55.130155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.623 [2024-11-20 12:52:55.130164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.623 [2024-11-20 12:52:55.130215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.623 [2024-11-20 12:52:55.130226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:29.623 [2024-11-20 12:52:55.130238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.623 [2024-11-20 12:52:55.130246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.623 [2024-11-20 12:52:55.130403] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 368.748 ms, result 0 00:20:30.567 00:20:30.567 00:20:30.567 12:52:55 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:30.567 12:52:55 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:31.139 12:52:56 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:31.139 [2024-11-20 12:52:56.506494] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
00:20:31.139 [2024-11-20 12:52:56.506605] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76939 ] 00:20:31.399 [2024-11-20 12:52:56.659990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.399 [2024-11-20 12:52:56.758844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.660 [2024-11-20 12:52:57.036404] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:31.660 [2024-11-20 12:52:57.036489] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:31.924 [2024-11-20 12:52:57.196929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.924 [2024-11-20 12:52:57.197158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:31.924 [2024-11-20 12:52:57.197183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:31.924 [2024-11-20 12:52:57.197194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.924 [2024-11-20 12:52:57.200342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.924 [2024-11-20 12:52:57.200533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:31.924 [2024-11-20 12:52:57.200553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.119 ms 00:20:31.924 [2024-11-20 12:52:57.200562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.924 [2024-11-20 12:52:57.201084] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:31.924 [2024-11-20 12:52:57.201913] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:31.924 [2024-11-20 12:52:57.201953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.924 [2024-11-20 12:52:57.201964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:31.924 [2024-11-20 12:52:57.201984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.888 ms 00:20:31.924 [2024-11-20 12:52:57.201992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.924 [2024-11-20 12:52:57.203780] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:31.924 [2024-11-20 12:52:57.218126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.924 [2024-11-20 12:52:57.218179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:31.924 [2024-11-20 12:52:57.218192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.348 ms 00:20:31.924 [2024-11-20 12:52:57.218201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.924 [2024-11-20 12:52:57.218321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.924 [2024-11-20 12:52:57.218334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:31.924 [2024-11-20 12:52:57.218344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:31.924 [2024-11-20 12:52:57.218352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.924 [2024-11-20 12:52:57.226491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:31.924 [2024-11-20 12:52:57.226688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:31.924 [2024-11-20 12:52:57.226708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.094 ms 00:20:31.924 [2024-11-20 12:52:57.226717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.924 [2024-11-20 12:52:57.226845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.924 [2024-11-20 12:52:57.226857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:31.924 [2024-11-20 12:52:57.226866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:31.924 [2024-11-20 12:52:57.226874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.924 [2024-11-20 12:52:57.226904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.924 [2024-11-20 12:52:57.226917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:31.924 [2024-11-20 12:52:57.226925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:31.924 [2024-11-20 12:52:57.226933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.924 [2024-11-20 12:52:57.226954] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:31.924 [2024-11-20 12:52:57.230861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.924 [2024-11-20 12:52:57.230899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:31.924 [2024-11-20 12:52:57.230911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.913 ms 00:20:31.924 [2024-11-20 12:52:57.230919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.924 [2024-11-20 12:52:57.230993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.924 [2024-11-20 12:52:57.231003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:31.924 [2024-11-20 12:52:57.231013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:31.925 [2024-11-20 12:52:57.231022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.925 [2024-11-20 12:52:57.231042] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:31.925 [2024-11-20 12:52:57.231066] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:31.925 [2024-11-20 12:52:57.231104] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:31.925 [2024-11-20 12:52:57.231121] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:31.925 [2024-11-20 12:52:57.231228] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:31.925 [2024-11-20 12:52:57.231240] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:31.925 [2024-11-20 12:52:57.231251] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:31.925 [2024-11-20 12:52:57.231262] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:31.925 [2024-11-20 12:52:57.231274] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:31.925 [2024-11-20 12:52:57.231282] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:31.925 [2024-11-20 12:52:57.231291] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:31.925 [2024-11-20 12:52:57.231298] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:31.925 [2024-11-20 12:52:57.231306] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:31.925 [2024-11-20 12:52:57.231314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.925 [2024-11-20 12:52:57.231323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:31.925 [2024-11-20 12:52:57.231331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:20:31.925 [2024-11-20 12:52:57.231338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.925 [2024-11-20 12:52:57.231426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.925 [2024-11-20 12:52:57.231436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:31.925 [2024-11-20 12:52:57.231447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:31.925 [2024-11-20 12:52:57.231454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.925 [2024-11-20 12:52:57.231584] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:31.925 [2024-11-20 12:52:57.231597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:31.925 [2024-11-20 12:52:57.231606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:31.925 [2024-11-20 12:52:57.231613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.925 [2024-11-20 12:52:57.231622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:31.925 [2024-11-20 12:52:57.231630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:31.925 [2024-11-20 12:52:57.231637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:31.925 [2024-11-20 12:52:57.231644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:31.925 [2024-11-20 12:52:57.231652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:31.925 [2024-11-20 12:52:57.231659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:31.925 [2024-11-20 12:52:57.231667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:31.925 [2024-11-20 12:52:57.231675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:31.925 [2024-11-20 12:52:57.231681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:31.925 [2024-11-20 12:52:57.231695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:31.925 [2024-11-20 12:52:57.231702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:31.925 [2024-11-20 12:52:57.231712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.925 [2024-11-20 12:52:57.231719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:31.925 [2024-11-20 12:52:57.231726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:31.925 [2024-11-20 12:52:57.231733] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.925 [2024-11-20 12:52:57.231765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:31.925 [2024-11-20 12:52:57.231773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:31.925 [2024-11-20 12:52:57.231780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.925 [2024-11-20 12:52:57.231787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:31.925 [2024-11-20 12:52:57.231795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:31.925 [2024-11-20 12:52:57.231802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.925 [2024-11-20 12:52:57.231810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:31.925 [2024-11-20 12:52:57.231817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:31.925 [2024-11-20 12:52:57.231824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.925 [2024-11-20 12:52:57.231830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:31.925 [2024-11-20 12:52:57.231838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:31.925 [2024-11-20 12:52:57.231845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.925 [2024-11-20 12:52:57.231852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:31.925 [2024-11-20 12:52:57.231860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:31.925 [2024-11-20 12:52:57.231866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:31.925 [2024-11-20 12:52:57.231874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:31.925 [2024-11-20 12:52:57.231881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:31.925 [2024-11-20 12:52:57.231888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:31.925 [2024-11-20 12:52:57.231894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:31.925 [2024-11-20 12:52:57.231901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:31.925 [2024-11-20 12:52:57.231908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.925 [2024-11-20 12:52:57.231915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:31.925 [2024-11-20 12:52:57.231922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:31.925 [2024-11-20 12:52:57.231930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.925 [2024-11-20 12:52:57.231936] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:31.925 [2024-11-20 12:52:57.231945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:31.925 [2024-11-20 12:52:57.231953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:31.925 [2024-11-20 12:52:57.231963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.925 [2024-11-20 12:52:57.231973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:31.925 [2024-11-20 12:52:57.231980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:31.925 [2024-11-20 12:52:57.231987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:31.925 
[2024-11-20 12:52:57.231994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:31.925 [2024-11-20 12:52:57.232001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:31.925 [2024-11-20 12:52:57.232007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:31.925 [2024-11-20 12:52:57.232016] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:31.925 [2024-11-20 12:52:57.232026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:31.925 [2024-11-20 12:52:57.232034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:31.925 [2024-11-20 12:52:57.232042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:31.925 [2024-11-20 12:52:57.232050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:31.925 [2024-11-20 12:52:57.232057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:31.925 [2024-11-20 12:52:57.232078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:31.925 [2024-11-20 12:52:57.232086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:31.925 [2024-11-20 12:52:57.232093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:31.925 [2024-11-20 12:52:57.232100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:31.925 [2024-11-20 12:52:57.232109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:31.925 [2024-11-20 12:52:57.232116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:31.925 [2024-11-20 12:52:57.232124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:31.925 [2024-11-20 12:52:57.232131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:31.925 [2024-11-20 12:52:57.232138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:31.925 [2024-11-20 12:52:57.232145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:31.925 [2024-11-20 12:52:57.232153] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:31.925 [2024-11-20 12:52:57.232162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:31.925 [2024-11-20 12:52:57.232170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:31.925 [2024-11-20 12:52:57.232177] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:31.925 [2024-11-20 12:52:57.232184] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:31.925 [2024-11-20 12:52:57.232191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:31.926 [2024-11-20 12:52:57.232199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.926 [2024-11-20 12:52:57.232207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:31.926 [2024-11-20 12:52:57.232218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:20:31.926 [2024-11-20 12:52:57.232225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.926 [2024-11-20 12:52:57.264250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.926 [2024-11-20 12:52:57.264297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:31.926 [2024-11-20 12:52:57.264308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.969 ms 00:20:31.926 [2024-11-20 12:52:57.264316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.926 [2024-11-20 12:52:57.264449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.926 [2024-11-20 12:52:57.264464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:31.926 [2024-11-20 12:52:57.264473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:31.926 [2024-11-20 12:52:57.264482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.926 [2024-11-20 12:52:57.311871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.926 [2024-11-20 12:52:57.311923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:31.926 [2024-11-20 12:52:57.311936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.367 ms 00:20:31.926 [2024-11-20 12:52:57.311949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.926 [2024-11-20 12:52:57.312062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.926 [2024-11-20 12:52:57.312075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:31.926 [2024-11-20 12:52:57.312085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:31.926 [2024-11-20 12:52:57.312093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.926 [2024-11-20 12:52:57.312611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.926 [2024-11-20 12:52:57.312634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:31.926 [2024-11-20 12:52:57.312644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:20:31.926 [2024-11-20 12:52:57.312663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.926 [2024-11-20 12:52:57.312856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.926 [2024-11-20 12:52:57.312869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:31.926 [2024-11-20 12:52:57.312878] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:20:31.926 [2024-11-20 12:52:57.312909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.926 [2024-11-20 12:52:57.329187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.926 [2024-11-20 12:52:57.329233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:31.926 [2024-11-20 12:52:57.329244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.253 ms 00:20:31.926 [2024-11-20 12:52:57.329252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.926 [2024-11-20 12:52:57.343607] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:31.926 [2024-11-20 12:52:57.343656] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:31.926 [2024-11-20 12:52:57.343670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.926 [2024-11-20 12:52:57.343679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:31.926 [2024-11-20 12:52:57.343688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.309 ms 00:20:31.926 [2024-11-20 12:52:57.343696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.926 [2024-11-20 12:52:57.369325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.926 [2024-11-20 12:52:57.369401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:31.926 [2024-11-20 12:52:57.369415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.508 ms 00:20:31.926 [2024-11-20 12:52:57.369423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.926 [2024-11-20 12:52:57.382617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.926 [2024-11-20 12:52:57.382666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:31.926 [2024-11-20 12:52:57.382678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.097 ms 00:20:31.926 [2024-11-20 12:52:57.382686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.926 [2024-11-20 12:52:57.395504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.926 [2024-11-20 12:52:57.395548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:31.926 [2024-11-20 12:52:57.395570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.707 ms 00:20:31.926 [2024-11-20 12:52:57.395578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.926 [2024-11-20 12:52:57.396258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.926 [2024-11-20 12:52:57.396284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:31.926 [2024-11-20 12:52:57.396294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:20:31.926 [2024-11-20 12:52:57.396302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.188 [2024-11-20 12:52:57.462690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.188 [2024-11-20 12:52:57.462775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:32.188 [2024-11-20 12:52:57.462793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.358 ms 00:20:32.188 [2024-11-20 12:52:57.462803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.188 [2024-11-20 12:52:57.474209] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:32.188 [2024-11-20 12:52:57.493885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.188 [2024-11-20 12:52:57.493946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:32.188 [2024-11-20 12:52:57.493966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.949 ms 00:20:32.189 [2024-11-20 12:52:57.493980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.189 [2024-11-20 12:52:57.494112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.189 [2024-11-20 12:52:57.494126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:32.189 [2024-11-20 12:52:57.494137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:32.189 [2024-11-20 12:52:57.494146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.189 [2024-11-20 12:52:57.494205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.189 [2024-11-20 12:52:57.494215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:32.189 [2024-11-20 12:52:57.494225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:32.189 [2024-11-20 12:52:57.494233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.189 [2024-11-20 12:52:57.494261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.189 [2024-11-20 12:52:57.494273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:32.189 [2024-11-20 12:52:57.494282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:32.189 [2024-11-20 12:52:57.494290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.189 [2024-11-20 12:52:57.494330] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:32.189 [2024-11-20 12:52:57.494342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.189 [2024-11-20 12:52:57.494352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:32.189 [2024-11-20 12:52:57.494361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:32.189 [2024-11-20 12:52:57.494369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.189 [2024-11-20 12:52:57.520649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.189 [2024-11-20 12:52:57.520698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:32.189 [2024-11-20 12:52:57.520712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.256 ms 00:20:32.189 [2024-11-20 12:52:57.520720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.189 [2024-11-20 12:52:57.520913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.189 [2024-11-20 12:52:57.520927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:32.189 [2024-11-20 12:52:57.520938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:20:32.189 [2024-11-20 12:52:57.520947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:32.189 [2024-11-20 12:52:57.522677] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:32.189 [2024-11-20 12:52:57.526096] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 325.420 ms, result 0 00:20:32.189 [2024-11-20 12:52:57.527305] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:32.189 [2024-11-20 12:52:57.540908] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:32.452  [2024-11-20T12:52:57.971Z] Copying: 4096/4096 [kB] (average 16 MBps)[2024-11-20 12:52:57.788604] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:32.452 [2024-11-20 12:52:57.797673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.452 [2024-11-20 12:52:57.797719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:32.452 [2024-11-20 12:52:57.797734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:32.452 [2024-11-20 12:52:57.797769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.452 [2024-11-20 12:52:57.797793] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:32.452 [2024-11-20 12:52:57.800779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.452 [2024-11-20 12:52:57.800821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:32.452 [2024-11-20 12:52:57.800832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.972 ms 00:20:32.452 [2024-11-20 12:52:57.800840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.452 [2024-11-20 12:52:57.803574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.452 [2024-11-20 12:52:57.803757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:32.452 [2024-11-20 12:52:57.803778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.706 ms 00:20:32.452 [2024-11-20 12:52:57.803786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.452 [2024-11-20 12:52:57.808100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.452 [2024-11-20 12:52:57.808143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:32.452 [2024-11-20 12:52:57.808155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.293 ms 00:20:32.452 [2024-11-20 12:52:57.808162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.452 [2024-11-20 12:52:57.815109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.452 [2024-11-20 12:52:57.815150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:32.452 [2024-11-20 12:52:57.815162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.914 ms 00:20:32.452 [2024-11-20 12:52:57.815169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.452 [2024-11-20 12:52:57.840399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.452 [2024-11-20 12:52:57.840445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:32.452 [2024-11-20 12:52:57.840459] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 25.181 ms 00:20:32.452 [2024-11-20 12:52:57.840467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.452 [2024-11-20 12:52:57.856046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.452 [2024-11-20 12:52:57.856099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:32.452 [2024-11-20 12:52:57.856116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.529 ms 00:20:32.452 [2024-11-20 12:52:57.856124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.452 [2024-11-20 12:52:57.856280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.452 [2024-11-20 12:52:57.856291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:32.452 [2024-11-20 12:52:57.856301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:20:32.452 [2024-11-20 12:52:57.856308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.452 [2024-11-20 12:52:57.882051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.452 [2024-11-20 12:52:57.882242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:32.452 [2024-11-20 12:52:57.882262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.716 ms 00:20:32.452 [2024-11-20 12:52:57.882269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.452 [2024-11-20 12:52:57.907480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.452 [2024-11-20 12:52:57.907522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:32.452 [2024-11-20 12:52:57.907535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.114 ms 00:20:32.452 [2024-11-20 12:52:57.907541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.452 [2024-11-20 12:52:57.931650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.452 [2024-11-20 12:52:57.931693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:32.452 [2024-11-20 12:52:57.931704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.050 ms 00:20:32.452 [2024-11-20 12:52:57.931711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.452 [2024-11-20 12:52:57.955917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.452 [2024-11-20 12:52:57.955961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:32.452 [2024-11-20 12:52:57.955972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.082 ms 00:20:32.452 [2024-11-20 12:52:57.955979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.452 [2024-11-20 12:52:57.956027] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:32.452 [2024-11-20 12:52:57.956042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:20:32.453 [2024-11-20 12:52:57.956076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956636] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:32.453 [2024-11-20 12:52:57.956717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:32.454 [2024-11-20 12:52:57.956725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:32.454 [2024-11-20 12:52:57.956732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:32.454 [2024-11-20 12:52:57.956763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:32.454 [2024-11-20 12:52:57.956771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:32.454 [2024-11-20 12:52:57.956781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:32.454 [2024-11-20 12:52:57.956799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:32.454 [2024-11-20 12:52:57.956807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:32.454 [2024-11-20 12:52:57.956814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:32.454 [2024-11-20 12:52:57.956822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:32.454 [2024-11-20 12:52:57.956830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:32.454 [2024-11-20 12:52:57.956845] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:32.454 [2024-11-20 12:52:57.956854] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 200ffff6-6870-4bdc-85eb-29aedea7a1b0 00:20:32.454 [2024-11-20 12:52:57.956863] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:32.454 [2024-11-20 12:52:57.956871] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:20:32.454 [2024-11-20 12:52:57.956878] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:32.454 [2024-11-20 12:52:57.956886] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:32.454 [2024-11-20 12:52:57.956913] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:32.454 [2024-11-20 12:52:57.956923] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:32.454 [2024-11-20 12:52:57.956931] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:32.454 [2024-11-20 12:52:57.956938] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:32.454 [2024-11-20 12:52:57.956944] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:32.454 [2024-11-20 12:52:57.956952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.454 [2024-11-20 12:52:57.956964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:32.454 [2024-11-20 12:52:57.956972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.926 ms 00:20:32.454 [2024-11-20 12:52:57.956980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.715 [2024-11-20 12:52:57.970229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.715 [2024-11-20 12:52:57.970273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:32.715 [2024-11-20 12:52:57.970284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.216 ms 00:20:32.715 [2024-11-20 12:52:57.970292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.715 [2024-11-20 12:52:57.970698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.715 [2024-11-20 12:52:57.970716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:32.715 [2024-11-20 12:52:57.970726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:20:32.715 [2024-11-20 12:52:57.970733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.715 [2024-11-20 12:52:58.009424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.715 [2024-11-20 12:52:58.009474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:32.715 [2024-11-20 12:52:58.009485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.715 [2024-11-20 12:52:58.009494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.715 [2024-11-20 12:52:58.009573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.715 [2024-11-20 12:52:58.009581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:32.715 [2024-11-20 12:52:58.009589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.715 [2024-11-20 12:52:58.009597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.715 [2024-11-20 12:52:58.009653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.715 [2024-11-20 12:52:58.009663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:32.715 [2024-11-20 12:52:58.009671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.715 [2024-11-20 12:52:58.009679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.715 [2024-11-20 12:52:58.009697] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.715 [2024-11-20 12:52:58.009710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:32.715 [2024-11-20 12:52:58.009718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.715 [2024-11-20 12:52:58.009726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.715 [2024-11-20 12:52:58.093597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.715 [2024-11-20 12:52:58.093653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:32.715 [2024-11-20 12:52:58.093667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.715 [2024-11-20 12:52:58.093676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.715 [2024-11-20 12:52:58.165766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.715 [2024-11-20 12:52:58.165824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:32.715 [2024-11-20 12:52:58.165838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.715 [2024-11-20 12:52:58.165848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.715 [2024-11-20 12:52:58.165917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.716 [2024-11-20 12:52:58.165927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:32.716 [2024-11-20 12:52:58.165937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.716 [2024-11-20 12:52:58.165947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.716 [2024-11-20 12:52:58.165980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.716 [2024-11-20 12:52:58.165990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:32.716 [2024-11-20 12:52:58.166006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.716 [2024-11-20 12:52:58.166015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.716 [2024-11-20 12:52:58.166113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.716 [2024-11-20 12:52:58.166124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:32.716 [2024-11-20 12:52:58.166132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.716 [2024-11-20 12:52:58.166141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.716 [2024-11-20 12:52:58.166176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.716 [2024-11-20 12:52:58.166187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:32.716 [2024-11-20 12:52:58.166195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.716 [2024-11-20 12:52:58.166207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.716 [2024-11-20 12:52:58.166251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.716 [2024-11-20 12:52:58.166260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:32.716 [2024-11-20 12:52:58.166268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.716 [2024-11-20 12:52:58.166277] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:32.716 [2024-11-20 12:52:58.166327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.716 [2024-11-20 12:52:58.166338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:32.716 [2024-11-20 12:52:58.166349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.716 [2024-11-20 12:52:58.166357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.716 [2024-11-20 12:52:58.166519] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 368.829 ms, result 0 00:20:33.662 00:20:33.662 00:20:33.662 12:52:58 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76964 00:20:33.662 12:52:58 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:33.662 12:52:58 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76964 00:20:33.662 12:52:58 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76964 ']' 00:20:33.662 12:52:58 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.662 12:52:58 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:33.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.662 12:52:58 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.662 12:52:58 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:33.662 12:52:58 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:33.662 [2024-11-20 12:52:59.001834] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
00:20:33.662 [2024-11-20 12:52:59.001993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76964 ] 00:20:33.662 [2024-11-20 12:52:59.157805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.922 [2024-11-20 12:52:59.275988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.494 12:52:59 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.494 12:52:59 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:34.494 12:52:59 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:34.755 [2024-11-20 12:53:00.260282] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:34.755 [2024-11-20 12:53:00.260362] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:35.019 [2024-11-20 12:53:00.440149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.019 [2024-11-20 12:53:00.440215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:35.019 [2024-11-20 12:53:00.440233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:35.019 [2024-11-20 12:53:00.440242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.019 [2024-11-20 12:53:00.443269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.019 [2024-11-20 12:53:00.443322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:35.019 [2024-11-20 12:53:00.443335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.004 ms 00:20:35.019 [2024-11-20 12:53:00.443343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.019 [2024-11-20 12:53:00.443467] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:35.019 [2024-11-20 12:53:00.444338] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:35.019 [2024-11-20 12:53:00.444392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.019 [2024-11-20 12:53:00.444401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:35.019 [2024-11-20 12:53:00.444412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:20:35.019 [2024-11-20 12:53:00.444420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.019 [2024-11-20 12:53:00.446296] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:35.019 [2024-11-20 12:53:00.460604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.019 [2024-11-20 12:53:00.460663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:35.019 [2024-11-20 12:53:00.460678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.317 ms 00:20:35.019 [2024-11-20 12:53:00.460688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.019 [2024-11-20 12:53:00.460816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.019 [2024-11-20 12:53:00.460832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:35.019 [2024-11-20 12:53:00.460842] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:35.019 [2024-11-20 12:53:00.460852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.019 [2024-11-20 12:53:00.469089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.019 [2024-11-20 12:53:00.469143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:35.019 [2024-11-20 12:53:00.469153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.181 ms 00:20:35.019 [2024-11-20 12:53:00.469164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.019 [2024-11-20 12:53:00.469281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.019 [2024-11-20 12:53:00.469295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:35.019 [2024-11-20 12:53:00.469304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:35.019 [2024-11-20 12:53:00.469314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.019 [2024-11-20 12:53:00.469346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.019 [2024-11-20 12:53:00.469357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:35.019 [2024-11-20 12:53:00.469365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:35.019 [2024-11-20 12:53:00.469374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.019 [2024-11-20 12:53:00.469399] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:35.019 [2024-11-20 12:53:00.473391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.019 [2024-11-20 12:53:00.473437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:35.019 [2024-11-20 12:53:00.473450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.996 ms 00:20:35.019 [2024-11-20 12:53:00.473459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.019 [2024-11-20 12:53:00.473538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.019 [2024-11-20 12:53:00.473549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:35.019 [2024-11-20 12:53:00.473574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:35.019 [2024-11-20 12:53:00.473585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.019 [2024-11-20 12:53:00.473610] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:35.019 [2024-11-20 12:53:00.473631] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:35.019 [2024-11-20 12:53:00.473676] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:35.019 [2024-11-20 12:53:00.473692] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:35.019 [2024-11-20 12:53:00.473817] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:35.019 [2024-11-20 12:53:00.473830] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:35.019 [2024-11-20 12:53:00.473845] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:35.019 [2024-11-20 12:53:00.473860] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:35.019 [2024-11-20 12:53:00.473871] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:35.019 [2024-11-20 12:53:00.473879] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:35.019 [2024-11-20 12:53:00.473889] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:35.019 [2024-11-20 12:53:00.473896] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:35.019 [2024-11-20 12:53:00.473908] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:35.019 [2024-11-20 12:53:00.473917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.019 [2024-11-20 12:53:00.473926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:35.019 [2024-11-20 12:53:00.473934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:20:35.019 [2024-11-20 12:53:00.473943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.019 [2024-11-20 12:53:00.474032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.019 [2024-11-20 12:53:00.474043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:35.019 [2024-11-20 12:53:00.474051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:35.019 [2024-11-20 12:53:00.474060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.019 [2024-11-20 12:53:00.474163] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:35.019 [2024-11-20 12:53:00.474176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:35.019 [2024-11-20 12:53:00.474184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:35.019 [2024-11-20 12:53:00.474194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.019 [2024-11-20 12:53:00.474202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:35.019 [2024-11-20 12:53:00.474211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:35.019 [2024-11-20 12:53:00.474218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:35.019 [2024-11-20 12:53:00.474232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:35.019 [2024-11-20 12:53:00.474240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:35.019 [2024-11-20 12:53:00.474249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:35.019 [2024-11-20 12:53:00.474256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:35.020 [2024-11-20 12:53:00.474265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:35.020 [2024-11-20 12:53:00.474271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:35.020 [2024-11-20 12:53:00.474280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:35.020 [2024-11-20 12:53:00.474289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:35.020 [2024-11-20 12:53:00.474301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.020 
[2024-11-20 12:53:00.474308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:35.020 [2024-11-20 12:53:00.474318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:35.020 [2024-11-20 12:53:00.474325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.020 [2024-11-20 12:53:00.474334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:35.020 [2024-11-20 12:53:00.474347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:35.020 [2024-11-20 12:53:00.474356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.020 [2024-11-20 12:53:00.474362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:35.020 [2024-11-20 12:53:00.474373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:35.020 [2024-11-20 12:53:00.474380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.020 [2024-11-20 12:53:00.474389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:35.020 [2024-11-20 12:53:00.474395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:35.020 [2024-11-20 12:53:00.474403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.020 [2024-11-20 12:53:00.474410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:35.020 [2024-11-20 12:53:00.474418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:35.020 [2024-11-20 12:53:00.474424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.020 [2024-11-20 12:53:00.474434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:35.020 [2024-11-20 12:53:00.474440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:35.020 [2024-11-20 12:53:00.474450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:35.020 [2024-11-20 12:53:00.474457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:35.020 [2024-11-20 12:53:00.474465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:35.020 [2024-11-20 12:53:00.474472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:35.020 [2024-11-20 12:53:00.474481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:35.020 [2024-11-20 12:53:00.474488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:35.020 [2024-11-20 12:53:00.474498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.020 [2024-11-20 12:53:00.474505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:35.020 [2024-11-20 12:53:00.474513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:35.020 [2024-11-20 12:53:00.474520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.020 [2024-11-20 12:53:00.474529] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:35.020 [2024-11-20 12:53:00.474536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:35.020 [2024-11-20 12:53:00.474548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:35.020 [2024-11-20 12:53:00.474555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.020 [2024-11-20 12:53:00.474567] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:35.020 [2024-11-20 12:53:00.474574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:35.020 [2024-11-20 12:53:00.474583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:35.020 [2024-11-20 12:53:00.474591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:35.020 [2024-11-20 12:53:00.474599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:35.020 [2024-11-20 12:53:00.474606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:35.020 [2024-11-20 12:53:00.474616] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:35.020 [2024-11-20 12:53:00.474626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:35.020 [2024-11-20 12:53:00.474638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:35.020 [2024-11-20 12:53:00.474646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:35.020 [2024-11-20 12:53:00.474656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:35.020 [2024-11-20 12:53:00.474664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:35.020 [2024-11-20 12:53:00.474674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:35.020 [2024-11-20 12:53:00.474681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:35.020 [2024-11-20 12:53:00.474689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:35.020 [2024-11-20 12:53:00.474696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:35.020 [2024-11-20 12:53:00.474706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:35.020 [2024-11-20 12:53:00.474713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:35.020 [2024-11-20 12:53:00.474722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:35.020 [2024-11-20 12:53:00.474729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:35.020 [2024-11-20 12:53:00.474750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:35.020 [2024-11-20 12:53:00.474757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:35.020 [2024-11-20 12:53:00.474766] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:35.020 [2024-11-20 
12:53:00.474775] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:35.020 [2024-11-20 12:53:00.474787] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:35.020 [2024-11-20 12:53:00.474794] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:35.020 [2024-11-20 12:53:00.474803] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:35.020 [2024-11-20 12:53:00.474811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:35.020 [2024-11-20 12:53:00.474821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.020 [2024-11-20 12:53:00.474828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:35.020 [2024-11-20 12:53:00.474837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:20:35.020 [2024-11-20 12:53:00.474844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.020 [2024-11-20 12:53:00.507105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.020 [2024-11-20 12:53:00.507160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:35.020 [2024-11-20 12:53:00.507176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.196 ms 00:20:35.020 [2024-11-20 12:53:00.507184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.020 [2024-11-20 12:53:00.507324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.020 [2024-11-20 12:53:00.507335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:35.020 [2024-11-20 12:53:00.507346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:35.020 [2024-11-20 12:53:00.507354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.282 [2024-11-20 12:53:00.542821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.282 [2024-11-20 12:53:00.542865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:35.282 [2024-11-20 12:53:00.542884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.439 ms 00:20:35.282 [2024-11-20 12:53:00.542892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.282 [2024-11-20 12:53:00.542986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.282 [2024-11-20 12:53:00.542996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:35.282 [2024-11-20 12:53:00.543008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:35.282 [2024-11-20 12:53:00.543015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.282 [2024-11-20 12:53:00.543623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.282 [2024-11-20 12:53:00.543655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:35.282 [2024-11-20 12:53:00.543672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:20:35.282 [2024-11-20 12:53:00.543680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:35.282 [2024-11-20 12:53:00.543853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.282 [2024-11-20 12:53:00.543865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:35.282 [2024-11-20 12:53:00.543877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:20:35.282 [2024-11-20 12:53:00.543885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.282 [2024-11-20 12:53:00.562032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.282 [2024-11-20 12:53:00.562077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:35.282 [2024-11-20 12:53:00.562091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.121 ms 00:20:35.282 [2024-11-20 12:53:00.562100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.282 [2024-11-20 12:53:00.576415] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:35.282 [2024-11-20 12:53:00.576467] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:35.282 [2024-11-20 12:53:00.576483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.282 [2024-11-20 12:53:00.576492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:35.282 [2024-11-20 12:53:00.576504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.263 ms 00:20:35.282 [2024-11-20 12:53:00.576512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.282 [2024-11-20 12:53:00.602661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.282 [2024-11-20 12:53:00.602715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:35.282 [2024-11-20 12:53:00.602731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.045 ms 00:20:35.282 [2024-11-20 12:53:00.602747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.282 [2024-11-20 12:53:00.615917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.282 [2024-11-20 12:53:00.615964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:35.282 [2024-11-20 12:53:00.615981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.065 ms 00:20:35.282 [2024-11-20 12:53:00.615989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.282 [2024-11-20 12:53:00.628754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.282 [2024-11-20 12:53:00.628805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:35.282 [2024-11-20 12:53:00.628819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.671 ms 00:20:35.282 [2024-11-20 12:53:00.628827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.282 [2024-11-20 12:53:00.629485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.282 [2024-11-20 12:53:00.629516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:35.282 [2024-11-20 12:53:00.629529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:20:35.282 [2024-11-20 12:53:00.629537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.282 [2024-11-20 
12:53:00.701850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.282 [2024-11-20 12:53:00.701925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:35.282 [2024-11-20 12:53:00.701946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.279 ms 00:20:35.282 [2024-11-20 12:53:00.701955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.282 [2024-11-20 12:53:00.713090] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:35.282 [2024-11-20 12:53:00.732061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.282 [2024-11-20 12:53:00.732127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:35.282 [2024-11-20 12:53:00.732144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.990 ms 00:20:35.282 [2024-11-20 12:53:00.732154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.282 [2024-11-20 12:53:00.732249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.282 [2024-11-20 12:53:00.732264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:35.282 [2024-11-20 12:53:00.732273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:35.282 [2024-11-20 12:53:00.732283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.282 [2024-11-20 12:53:00.732340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.282 [2024-11-20 12:53:00.732352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:35.282 [2024-11-20 12:53:00.732361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:35.282 [2024-11-20 12:53:00.732372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.282 [2024-11-20 12:53:00.732399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.283 [2024-11-20 12:53:00.732410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:35.283 [2024-11-20 12:53:00.732418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:35.283 [2024-11-20 12:53:00.732431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.283 [2024-11-20 12:53:00.732466] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:35.283 [2024-11-20 12:53:00.732480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.283 [2024-11-20 12:53:00.732488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:35.283 [2024-11-20 12:53:00.732502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:35.283 [2024-11-20 12:53:00.732510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.283 [2024-11-20 12:53:00.758799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.283 [2024-11-20 12:53:00.758851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:35.283 [2024-11-20 12:53:00.758869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.260 ms 00:20:35.283 [2024-11-20 12:53:00.758878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.283 [2024-11-20 12:53:00.759019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.283 [2024-11-20 12:53:00.759031] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:35.283 [2024-11-20 12:53:00.759043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:35.283 [2024-11-20 12:53:00.759056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.283 [2024-11-20 12:53:00.760218] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:35.283 [2024-11-20 12:53:00.763776] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 319.749 ms, result 0 00:20:35.283 [2024-11-20 12:53:00.765914] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:35.283 Some configs were skipped because the RPC state that can call them passed over. 00:20:35.545 12:53:00 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:35.545 [2024-11-20 12:53:01.014595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.545 [2024-11-20 12:53:01.014667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:35.545 [2024-11-20 12:53:01.014682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.933 ms 00:20:35.545 [2024-11-20 12:53:01.014694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.545 [2024-11-20 12:53:01.014730] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.077 ms, result 0 00:20:35.545 true 00:20:35.545 12:53:01 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:35.807 [2024-11-20 12:53:01.222254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.807 [2024-11-20 12:53:01.222310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:35.807 [2024-11-20 12:53:01.222326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.349 ms 00:20:35.807 [2024-11-20 12:53:01.222335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.807 [2024-11-20 12:53:01.222374] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.475 ms, result 0 00:20:35.807 true 00:20:35.807 12:53:01 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76964 00:20:35.807 12:53:01 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76964 ']' 00:20:35.807 12:53:01 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76964 00:20:35.807 12:53:01 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:35.807 12:53:01 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:35.807 12:53:01 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76964 00:20:35.807 12:53:01 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:35.807 12:53:01 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:35.807 killing process with pid 76964 00:20:35.807 12:53:01 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76964' 00:20:35.807 12:53:01 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76964 00:20:35.807 12:53:01 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76964 00:20:36.382 [2024-11-20 12:53:01.820032] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.382 [2024-11-20 12:53:01.820083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:36.382 [2024-11-20 12:53:01.820093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:36.382 [2024-11-20 12:53:01.820101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.382 [2024-11-20 12:53:01.820119] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:36.382 [2024-11-20 12:53:01.822281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.382 [2024-11-20 12:53:01.822309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:36.382 [2024-11-20 12:53:01.822320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.150 ms 00:20:36.382 [2024-11-20 12:53:01.822326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.382 [2024-11-20 12:53:01.822559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.382 [2024-11-20 12:53:01.822572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:36.382 [2024-11-20 12:53:01.822581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:20:36.382 [2024-11-20 12:53:01.822586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.382 [2024-11-20 12:53:01.825791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.382 [2024-11-20 12:53:01.825818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:36.382 [2024-11-20 12:53:01.825829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.188 ms 00:20:36.382 [2024-11-20 12:53:01.825835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.382 [2024-11-20 12:53:01.831439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.382 [2024-11-20 12:53:01.831478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:36.382 [2024-11-20 12:53:01.831488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.576 ms 00:20:36.382 [2024-11-20 12:53:01.831493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.382 [2024-11-20 12:53:01.838898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.382 [2024-11-20 12:53:01.838925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:36.382 [2024-11-20 12:53:01.838935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.348 ms 00:20:36.383 [2024-11-20 12:53:01.838946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.383 [2024-11-20 12:53:01.845360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.383 [2024-11-20 12:53:01.845390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:36.383 [2024-11-20 12:53:01.845401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.382 ms 00:20:36.383 [2024-11-20 12:53:01.845408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.383 [2024-11-20 12:53:01.845516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.383 [2024-11-20 12:53:01.845524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:36.383 [2024-11-20 12:53:01.845531] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:36.383 [2024-11-20 12:53:01.845537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.383 [2024-11-20 12:53:01.853422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.383 [2024-11-20 12:53:01.853449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:36.383 [2024-11-20 12:53:01.853457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.864 ms 00:20:36.383 [2024-11-20 12:53:01.853463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.383 [2024-11-20 12:53:01.860862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.383 [2024-11-20 12:53:01.860889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:36.383 [2024-11-20 12:53:01.860898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.369 ms 00:20:36.383 [2024-11-20 12:53:01.860904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.383 [2024-11-20 12:53:01.868120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.383 [2024-11-20 12:53:01.868146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:36.383 [2024-11-20 12:53:01.868154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.185 ms 00:20:36.383 [2024-11-20 12:53:01.868159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.383 [2024-11-20 12:53:01.875069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.383 [2024-11-20 12:53:01.875095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:36.383 [2024-11-20 12:53:01.875103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.858 ms 00:20:36.383 [2024-11-20 12:53:01.875108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.383 [2024-11-20 12:53:01.875136] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:36.383 [2024-11-20 12:53:01.875147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875213] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 
[2024-11-20 12:53:01.875372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:36.383 [2024-11-20 12:53:01.875530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:36.383 [2024-11-20 12:53:01.875596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:36.384 [2024-11-20 12:53:01.875821] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:36.384 [2024-11-20 12:53:01.875831] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 200ffff6-6870-4bdc-85eb-29aedea7a1b0 00:20:36.384 [2024-11-20 12:53:01.875841] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:36.384 [2024-11-20 12:53:01.875850] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:36.384 [2024-11-20 12:53:01.875855] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:36.384 [2024-11-20 12:53:01.875862] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:36.384 [2024-11-20 12:53:01.875867] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:36.384 [2024-11-20 12:53:01.875874] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:36.384 [2024-11-20 12:53:01.875880] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:36.384 [2024-11-20 12:53:01.875886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:36.384 [2024-11-20 12:53:01.875891] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:36.384 [2024-11-20 12:53:01.875897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:36.384 [2024-11-20 12:53:01.875903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:36.384 [2024-11-20 12:53:01.875911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:20:36.384 [2024-11-20 12:53:01.875916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.384 [2024-11-20 12:53:01.885565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.384 [2024-11-20 12:53:01.885590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:36.384 [2024-11-20 12:53:01.885600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.630 ms 00:20:36.384 [2024-11-20 12:53:01.885606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.384 [2024-11-20 12:53:01.885907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.384 [2024-11-20 12:53:01.885924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:36.384 [2024-11-20 12:53:01.885932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:20:36.384 [2024-11-20 12:53:01.885940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.646 [2024-11-20 12:53:01.921388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.646 [2024-11-20 12:53:01.921417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:36.646 [2024-11-20 12:53:01.921426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.646 [2024-11-20 12:53:01.921432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.646 [2024-11-20 12:53:01.921505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.646 [2024-11-20 12:53:01.921513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:36.646 [2024-11-20 12:53:01.921521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.646 [2024-11-20 12:53:01.921528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.646 [2024-11-20 12:53:01.921560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.646 [2024-11-20 12:53:01.921568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:36.646 [2024-11-20 12:53:01.921577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.646 [2024-11-20 12:53:01.921583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.646 [2024-11-20 12:53:01.921597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.646 [2024-11-20 12:53:01.921602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:36.646 [2024-11-20 12:53:01.921610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.646 [2024-11-20 12:53:01.921615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.646 [2024-11-20 12:53:01.981055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.646 [2024-11-20 12:53:01.981087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:36.646 [2024-11-20 12:53:01.981097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.646 [2024-11-20 12:53:01.981103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.646 [2024-11-20 
12:53:02.029931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.646 [2024-11-20 12:53:02.029964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:36.646 [2024-11-20 12:53:02.029974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.646 [2024-11-20 12:53:02.029982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.646 [2024-11-20 12:53:02.030041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.646 [2024-11-20 12:53:02.030048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:36.646 [2024-11-20 12:53:02.030058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.646 [2024-11-20 12:53:02.030064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.646 [2024-11-20 12:53:02.030087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.646 [2024-11-20 12:53:02.030093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:36.646 [2024-11-20 12:53:02.030100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.646 [2024-11-20 12:53:02.030105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.646 [2024-11-20 12:53:02.030173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.646 [2024-11-20 12:53:02.030180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:36.646 [2024-11-20 12:53:02.030188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.646 [2024-11-20 12:53:02.030193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.646 [2024-11-20 12:53:02.030217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.646 [2024-11-20 12:53:02.030224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:36.646 [2024-11-20 12:53:02.030231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.646 [2024-11-20 12:53:02.030237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.646 [2024-11-20 12:53:02.030266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.646 [2024-11-20 12:53:02.030274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:36.646 [2024-11-20 12:53:02.030282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.646 [2024-11-20 12:53:02.030289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.646 [2024-11-20 12:53:02.030322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.646 [2024-11-20 12:53:02.030329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:36.646 [2024-11-20 12:53:02.030336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.646 [2024-11-20 12:53:02.030341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.646 [2024-11-20 12:53:02.030446] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 210.396 ms, result 0 00:20:37.219 12:53:02 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:37.219 [2024-11-20 12:53:02.599518] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:20:37.219 [2024-11-20 12:53:02.599660] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77017 ] 00:20:37.481 [2024-11-20 12:53:02.757053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.481 [2024-11-20 12:53:02.835192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.749 [2024-11-20 12:53:03.040384] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:37.749 [2024-11-20 12:53:03.040435] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:37.749 [2024-11-20 12:53:03.192038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.749 [2024-11-20 12:53:03.192075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:37.749 [2024-11-20 12:53:03.192086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:37.749 [2024-11-20 12:53:03.192092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.749 [2024-11-20 12:53:03.194151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.749 [2024-11-20 12:53:03.194182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:37.749 [2024-11-20 12:53:03.194190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.046 ms 00:20:37.749 [2024-11-20 12:53:03.194196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.749 [2024-11-20 12:53:03.194251] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:37.749 [2024-11-20 12:53:03.194761] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:37.749 [2024-11-20 12:53:03.194779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.749 [2024-11-20 12:53:03.194785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:37.749 [2024-11-20 12:53:03.194792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:20:37.749 [2024-11-20 12:53:03.194798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.749 [2024-11-20 12:53:03.195774] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:37.749 [2024-11-20 12:53:03.205233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.749 [2024-11-20 12:53:03.205265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:37.749 [2024-11-20 12:53:03.205273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.460 ms 00:20:37.749 [2024-11-20 12:53:03.205279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.749 [2024-11-20 12:53:03.205351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.749 [2024-11-20 12:53:03.205360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:37.749 [2024-11-20 12:53:03.205367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:37.749 [2024-11-20 
12:53:03.205373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.749 [2024-11-20 12:53:03.209827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.749 [2024-11-20 12:53:03.209853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:37.749 [2024-11-20 12:53:03.209861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.425 ms 00:20:37.749 [2024-11-20 12:53:03.209866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.749 [2024-11-20 12:53:03.209937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.749 [2024-11-20 12:53:03.209945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:37.749 [2024-11-20 12:53:03.209951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:37.749 [2024-11-20 12:53:03.209957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.749 [2024-11-20 12:53:03.209972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.749 [2024-11-20 12:53:03.209981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:37.749 [2024-11-20 12:53:03.209987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:37.749 [2024-11-20 12:53:03.209992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.749 [2024-11-20 12:53:03.210009] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:37.749 [2024-11-20 12:53:03.212619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.749 [2024-11-20 12:53:03.212645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:37.750 [2024-11-20 12:53:03.212653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.614 ms 00:20:37.750 [2024-11-20 12:53:03.212659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.750 [2024-11-20 12:53:03.212685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.750 [2024-11-20 12:53:03.212692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:37.750 [2024-11-20 12:53:03.212698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:37.750 [2024-11-20 12:53:03.212707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.750 [2024-11-20 12:53:03.212720] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:37.750 [2024-11-20 12:53:03.212735] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:37.750 [2024-11-20 12:53:03.212772] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:37.750 [2024-11-20 12:53:03.212783] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:37.750 [2024-11-20 12:53:03.212861] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:37.750 [2024-11-20 12:53:03.212872] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:37.750 [2024-11-20 12:53:03.212881] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:20:37.750 [2024-11-20 12:53:03.212889] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:37.750 [2024-11-20 12:53:03.212898] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:37.750 [2024-11-20 12:53:03.212908] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:37.750 [2024-11-20 12:53:03.212913] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:37.750 [2024-11-20 12:53:03.212919] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:37.750 [2024-11-20 12:53:03.212927] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:37.750 [2024-11-20 12:53:03.212933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.750 [2024-11-20 12:53:03.212939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:37.750 [2024-11-20 12:53:03.212944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:20:37.750 [2024-11-20 12:53:03.212949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.750 [2024-11-20 12:53:03.213018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.750 [2024-11-20 12:53:03.213024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:37.750 [2024-11-20 12:53:03.213032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:37.750 [2024-11-20 12:53:03.213038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.750 [2024-11-20 12:53:03.213113] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:37.750 [2024-11-20 12:53:03.213126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:37.750 [2024-11-20 12:53:03.213132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:37.750 [2024-11-20 12:53:03.213138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.750 [2024-11-20 12:53:03.213144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:37.750 [2024-11-20 12:53:03.213149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:37.750 [2024-11-20 12:53:03.213154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:37.750 [2024-11-20 12:53:03.213160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:37.750 [2024-11-20 12:53:03.213165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:37.750 [2024-11-20 12:53:03.213170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:37.750 [2024-11-20 12:53:03.213175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:37.750 [2024-11-20 12:53:03.213181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:37.750 [2024-11-20 12:53:03.213185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:37.750 [2024-11-20 12:53:03.213195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:37.750 [2024-11-20 12:53:03.213200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:37.750 [2024-11-20 12:53:03.213205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.750 [2024-11-20 12:53:03.213211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:20:37.750 [2024-11-20 12:53:03.213216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:37.750 [2024-11-20 12:53:03.213221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.750 [2024-11-20 12:53:03.213226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:37.750 [2024-11-20 12:53:03.213231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:37.750 [2024-11-20 12:53:03.213235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:37.750 [2024-11-20 12:53:03.213240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:37.750 [2024-11-20 12:53:03.213245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:37.750 [2024-11-20 12:53:03.213249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:37.750 [2024-11-20 12:53:03.213254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:37.750 [2024-11-20 12:53:03.213259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:37.750 [2024-11-20 12:53:03.213264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:37.750 [2024-11-20 12:53:03.213268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:37.750 [2024-11-20 12:53:03.213273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:37.750 [2024-11-20 12:53:03.213278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:37.750 [2024-11-20 12:53:03.213283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:37.750 [2024-11-20 12:53:03.213287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:37.750 [2024-11-20 12:53:03.213293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:37.750 [2024-11-20 12:53:03.213297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:37.750 [2024-11-20 12:53:03.213302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:37.750 [2024-11-20 12:53:03.213307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:37.750 [2024-11-20 12:53:03.213312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:37.750 [2024-11-20 12:53:03.213317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:37.750 [2024-11-20 12:53:03.213321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.750 [2024-11-20 12:53:03.213326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:37.750 [2024-11-20 12:53:03.213331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:37.750 [2024-11-20 12:53:03.213336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.750 [2024-11-20 12:53:03.213341] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:37.750 [2024-11-20 12:53:03.213346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:37.750 [2024-11-20 12:53:03.213352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:37.750 [2024-11-20 12:53:03.213358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.750 [2024-11-20 12:53:03.213364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:37.750 [2024-11-20 12:53:03.213370] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:37.750 [2024-11-20 12:53:03.213375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:37.750 [2024-11-20 12:53:03.213381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:37.750 [2024-11-20 12:53:03.213385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:37.750 [2024-11-20 12:53:03.213390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:37.750 [2024-11-20 12:53:03.213396] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:37.750 [2024-11-20 12:53:03.213403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:37.750 [2024-11-20 12:53:03.213409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:37.750 [2024-11-20 12:53:03.213414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:37.750 [2024-11-20 12:53:03.213420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:37.750 [2024-11-20 12:53:03.213426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:37.750 [2024-11-20 12:53:03.213431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:37.750 [2024-11-20 12:53:03.213436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:37.750 [2024-11-20 12:53:03.213441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:37.750 [2024-11-20 12:53:03.213446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:37.750 [2024-11-20 12:53:03.213452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:37.750 [2024-11-20 12:53:03.213457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:37.750 [2024-11-20 12:53:03.213463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:37.750 [2024-11-20 12:53:03.213469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:37.750 [2024-11-20 12:53:03.213474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:37.750 [2024-11-20 12:53:03.213479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:37.750 [2024-11-20 12:53:03.213484] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:37.750 [2024-11-20 12:53:03.213491] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:37.751 [2024-11-20 12:53:03.213497] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:37.751 [2024-11-20 12:53:03.213502] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:37.751 [2024-11-20 12:53:03.213507] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:37.751 [2024-11-20 12:53:03.213513] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:37.751 [2024-11-20 12:53:03.213518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.751 [2024-11-20 12:53:03.213523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:37.751 [2024-11-20 12:53:03.213531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:20:37.751 [2024-11-20 12:53:03.213536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.751 [2024-11-20 12:53:03.234445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.751 [2024-11-20 12:53:03.234472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:37.751 [2024-11-20 12:53:03.234480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.872 ms 00:20:37.751 [2024-11-20 12:53:03.234486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.751 [2024-11-20 12:53:03.234581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.751 [2024-11-20 12:53:03.234591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:37.751 [2024-11-20 12:53:03.234598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:37.751 [2024-11-20 12:53:03.234604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.020 [2024-11-20 12:53:03.274478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.020 [2024-11-20 12:53:03.274512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:38.020 [2024-11-20 12:53:03.274521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.858 ms 00:20:38.020 [2024-11-20 12:53:03.274530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.020 [2024-11-20 12:53:03.274591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.020 [2024-11-20 12:53:03.274600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:38.020 [2024-11-20 12:53:03.274607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:38.020 [2024-11-20 12:53:03.274613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.020 [2024-11-20 12:53:03.274918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.020 [2024-11-20 12:53:03.274932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:38.020 [2024-11-20 12:53:03.274938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:20:38.020 [2024-11-20 12:53:03.274944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.020 [2024-11-20 12:53:03.275052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:38.020 [2024-11-20 12:53:03.275059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:38.020 [2024-11-20 12:53:03.275065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:20:38.020 [2024-11-20 12:53:03.275071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.020 [2024-11-20 12:53:03.285831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.020 [2024-11-20 12:53:03.285857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:38.020 [2024-11-20 12:53:03.285865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.744 ms 00:20:38.020 [2024-11-20 12:53:03.285871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.020 [2024-11-20 12:53:03.295601] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:38.020 [2024-11-20 12:53:03.295630] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:38.020 [2024-11-20 12:53:03.295639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.020 [2024-11-20 12:53:03.295645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:38.020 [2024-11-20 12:53:03.295652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.681 ms 00:20:38.020 [2024-11-20 12:53:03.295658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.020 [2024-11-20 12:53:03.314205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.020 [2024-11-20 12:53:03.314244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:38.020 [2024-11-20 12:53:03.314253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.499 ms 00:20:38.020 [2024-11-20 12:53:03.314259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.020 [2024-11-20 12:53:03.323076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.020 [2024-11-20 12:53:03.323103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:38.020 [2024-11-20 12:53:03.323111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.762 ms 00:20:38.020 [2024-11-20 12:53:03.323117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.020 [2024-11-20 12:53:03.331867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.020 [2024-11-20 12:53:03.331894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:38.020 [2024-11-20 12:53:03.331902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.709 ms 00:20:38.020 [2024-11-20 12:53:03.331908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.020 [2024-11-20 12:53:03.332361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.020 [2024-11-20 12:53:03.332382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:38.020 [2024-11-20 12:53:03.332390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:20:38.020 [2024-11-20 12:53:03.332396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.020 [2024-11-20 12:53:03.376335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.020 [2024-11-20 12:53:03.376373] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:38.020 [2024-11-20 12:53:03.376383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.922 ms 00:20:38.020 [2024-11-20 12:53:03.376390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.020 [2024-11-20 12:53:03.384059] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:38.021 [2024-11-20 12:53:03.395659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.021 [2024-11-20 12:53:03.395689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:38.021 [2024-11-20 12:53:03.395699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.201 ms 00:20:38.021 [2024-11-20 12:53:03.395705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.021 [2024-11-20 12:53:03.395790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.021 [2024-11-20 12:53:03.395799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:38.021 [2024-11-20 12:53:03.395806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:38.021 [2024-11-20 12:53:03.395812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.021 [2024-11-20 12:53:03.395847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.021 [2024-11-20 12:53:03.395854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:38.021 [2024-11-20 12:53:03.395860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:38.021 [2024-11-20 12:53:03.395866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.021 [2024-11-20 12:53:03.395887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.021 [2024-11-20 12:53:03.395895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:38.021 [2024-11-20 12:53:03.395901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:38.021 [2024-11-20 12:53:03.395907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.021 [2024-11-20 12:53:03.395930] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:38.021 [2024-11-20 12:53:03.395937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.021 [2024-11-20 12:53:03.395943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:38.021 [2024-11-20 12:53:03.395949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:38.021 [2024-11-20 12:53:03.395955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.021 [2024-11-20 12:53:03.413811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.021 [2024-11-20 12:53:03.413838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:38.021 [2024-11-20 12:53:03.413847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.842 ms 00:20:38.021 [2024-11-20 12:53:03.413853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.021 [2024-11-20 12:53:03.413924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.021 [2024-11-20 12:53:03.413933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:38.021 [2024-11-20 12:53:03.413939] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:38.021 [2024-11-20 12:53:03.413945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.021 [2024-11-20 12:53:03.416121] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:38.021 [2024-11-20 12:53:03.425542] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 223.182 ms, result 0 00:20:38.021 [2024-11-20 12:53:03.427375] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:38.021 [2024-11-20 12:53:03.440671] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:39.405  [2024-11-20T12:53:05.869Z] Copying: 17/256 [MB] (17 MBps) [2024-11-20T12:53:06.812Z] Copying: 32/256 [MB] (15 MBps) [2024-11-20T12:53:07.756Z] Copying: 50/256 [MB] (18 MBps) [2024-11-20T12:53:08.702Z] Copying: 64/256 [MB] (13 MBps) [2024-11-20T12:53:09.646Z] Copying: 78/256 [MB] (14 MBps) [2024-11-20T12:53:10.591Z] Copying: 99/256 [MB] (21 MBps) [2024-11-20T12:53:11.532Z] Copying: 116/256 [MB] (17 MBps) [2024-11-20T12:53:12.918Z] Copying: 140/256 [MB] (23 MBps) [2024-11-20T12:53:13.860Z] Copying: 162/256 [MB] (21 MBps) [2024-11-20T12:53:14.803Z] Copying: 182/256 [MB] (20 MBps) [2024-11-20T12:53:15.745Z] Copying: 201/256 [MB] (18 MBps) [2024-11-20T12:53:16.689Z] Copying: 214/256 [MB] (12 MBps) [2024-11-20T12:53:17.635Z] Copying: 227/256 [MB] (13 MBps) [2024-11-20T12:53:18.206Z] Copying: 246/256 [MB] (18 MBps) [2024-11-20T12:53:18.206Z] Copying: 256/256 [MB] (average 17 MBps)[2024-11-20 12:53:18.188363] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:52.946 [2024-11-20 12:53:18.203668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.947 [2024-11-20 12:53:18.203735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:52.947 [2024-11-20 12:53:18.203784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:52.947 [2024-11-20 12:53:18.203806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.947 [2024-11-20 12:53:18.203845] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:52.947 [2024-11-20 12:53:18.206889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.947 [2024-11-20 12:53:18.206933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:52.947 [2024-11-20 12:53:18.206946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.022 ms 00:20:52.947 [2024-11-20 12:53:18.206955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.947 [2024-11-20 12:53:18.207250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.947 [2024-11-20 12:53:18.207262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:52.947 [2024-11-20 12:53:18.207272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:20:52.947 [2024-11-20 12:53:18.207280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.947 [2024-11-20 12:53:18.210996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.947 [2024-11-20 12:53:18.211029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:20:52.947 [2024-11-20 12:53:18.211039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.698 ms 00:20:52.947 [2024-11-20 12:53:18.211047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.947 [2024-11-20 12:53:18.218022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.947 [2024-11-20 12:53:18.218068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:52.947 [2024-11-20 12:53:18.218080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.955 ms 00:20:52.947 [2024-11-20 12:53:18.218088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.947 [2024-11-20 12:53:18.244561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.947 [2024-11-20 12:53:18.244614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:52.947 [2024-11-20 12:53:18.244628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.401 ms 00:20:52.947 [2024-11-20 12:53:18.244636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.947 [2024-11-20 12:53:18.260793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.947 [2024-11-20 12:53:18.260849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:52.947 [2024-11-20 12:53:18.260880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.086 ms 00:20:52.947 [2024-11-20 12:53:18.260894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.947 [2024-11-20 12:53:18.261059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.947 [2024-11-20 12:53:18.261071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:52.947 [2024-11-20 12:53:18.261081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:20:52.947 [2024-11-20 12:53:18.261090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.947 [2024-11-20 12:53:18.287974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.947 [2024-11-20 12:53:18.288028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:52.947 [2024-11-20 12:53:18.288040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.854 ms 00:20:52.947 [2024-11-20 12:53:18.288047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.947 [2024-11-20 12:53:18.313876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.947 [2024-11-20 12:53:18.313927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:52.947 [2024-11-20 12:53:18.313940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.759 ms 00:20:52.947 [2024-11-20 12:53:18.313947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.947 [2024-11-20 12:53:18.339281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.947 [2024-11-20 12:53:18.339331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:52.947 [2024-11-20 12:53:18.339344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.268 ms 00:20:52.947 [2024-11-20 12:53:18.339353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.947 [2024-11-20 12:53:18.364585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.947 [2024-11-20 12:53:18.364634] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:52.947 [2024-11-20 12:53:18.364646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.145 ms 00:20:52.947 [2024-11-20 12:53:18.364653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.947 [2024-11-20 12:53:18.364720] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:52.947 [2024-11-20 12:53:18.364758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:20:52.947 [2024-11-20 12:53:18.364933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.364994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.365002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.365010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.365017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.365024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:52.947 [2024-11-20 12:53:18.365031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365536] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:52.948 [2024-11-20 12:53:18.365577] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:52.948 [2024-11-20 12:53:18.365587] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 200ffff6-6870-4bdc-85eb-29aedea7a1b0 00:20:52.948 [2024-11-20 12:53:18.365596] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:52.948 [2024-11-20 12:53:18.365605] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:52.948 [2024-11-20 12:53:18.365612] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:52.948 [2024-11-20 12:53:18.365620] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:52.948 [2024-11-20 12:53:18.365627] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:52.948 [2024-11-20 12:53:18.365635] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:52.949 [2024-11-20 12:53:18.365643] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:52.949 [2024-11-20 12:53:18.365649] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:52.949 [2024-11-20 12:53:18.365655] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:52.949 [2024-11-20 12:53:18.365663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.949 [2024-11-20 12:53:18.365674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:52.949 [2024-11-20 12:53:18.365682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.945 ms 00:20:52.949 [2024-11-20 12:53:18.365690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.949 [2024-11-20 12:53:18.379139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.949 [2024-11-20 12:53:18.379186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:52.949 [2024-11-20 12:53:18.379198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.429 ms 00:20:52.949 [2024-11-20 12:53:18.379206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.949 [2024-11-20 12:53:18.379637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.949 [2024-11-20 12:53:18.379661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:52.949 [2024-11-20 12:53:18.379671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:20:52.949 [2024-11-20 12:53:18.379679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.949 [2024-11-20 12:53:18.418930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.949 [2024-11-20 12:53:18.418983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:52.949 [2024-11-20 12:53:18.418995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.949 [2024-11-20 12:53:18.419003] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:52.949 [2024-11-20 12:53:18.419116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.949 [2024-11-20 12:53:18.419126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:52.949 [2024-11-20 12:53:18.419136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.949 [2024-11-20 12:53:18.419144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.949 [2024-11-20 12:53:18.419201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.949 [2024-11-20 12:53:18.419212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:52.949 [2024-11-20 12:53:18.419220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.949 [2024-11-20 12:53:18.419228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.949 [2024-11-20 12:53:18.419247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.949 [2024-11-20 12:53:18.419259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:52.949 [2024-11-20 12:53:18.419266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.949 [2024-11-20 12:53:18.419273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.209 [2024-11-20 12:53:18.500284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.209 [2024-11-20 12:53:18.500322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:53.209 [2024-11-20 12:53:18.500332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.209 [2024-11-20 12:53:18.500339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.209 [2024-11-20 12:53:18.564320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.209 [2024-11-20 12:53:18.564362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:53.209 [2024-11-20 12:53:18.564372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.209 [2024-11-20 12:53:18.564379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.209 [2024-11-20 12:53:18.564446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.209 [2024-11-20 12:53:18.564455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:53.209 [2024-11-20 12:53:18.564463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.209 [2024-11-20 12:53:18.564470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.209 [2024-11-20 12:53:18.564497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.209 [2024-11-20 12:53:18.564505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:53.209 [2024-11-20 12:53:18.564514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.209 [2024-11-20 12:53:18.564522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.209 [2024-11-20 12:53:18.564605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.209 [2024-11-20 12:53:18.564614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:53.209 [2024-11-20 12:53:18.564622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:20:53.209 [2024-11-20 12:53:18.564629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.209 [2024-11-20 12:53:18.564657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.209 [2024-11-20 12:53:18.564666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:53.209 [2024-11-20 12:53:18.564673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.209 [2024-11-20 12:53:18.564683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.209 [2024-11-20 12:53:18.564719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.209 [2024-11-20 12:53:18.564726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:53.209 [2024-11-20 12:53:18.564733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.209 [2024-11-20 12:53:18.564756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.209 [2024-11-20 12:53:18.564797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:53.209 [2024-11-20 12:53:18.564806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:53.209 [2024-11-20 12:53:18.564816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:53.209 [2024-11-20 12:53:18.564824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.209 [2024-11-20 12:53:18.564947] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 361.336 ms, result 0 00:20:53.820 00:20:53.820 00:20:53.820 12:53:19 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:54.391 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:54.391 12:53:19 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:54.391 12:53:19 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:54.391 12:53:19 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:54.391 12:53:19 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:54.391 12:53:19 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:54.391 12:53:19 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:54.391 12:53:19 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76964 00:20:54.391 12:53:19 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76964 ']' 00:20:54.391 12:53:19 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76964 00:20:54.391 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76964) - No such process 00:20:54.391 Process with pid 76964 is not found 00:20:54.391 12:53:19 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 76964 is not found' 00:20:54.391 00:20:54.391 real 1m10.872s 00:20:54.391 user 1m36.338s 00:20:54.391 sys 0m5.242s 00:20:54.391 ************************************ 00:20:54.391 END TEST ftl_trim 00:20:54.391 ************************************ 00:20:54.391 12:53:19 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:54.391 12:53:19 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:54.652 12:53:19 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:54.652 12:53:19 
ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:54.652 12:53:19 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:54.652 12:53:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:54.652 ************************************ 00:20:54.652 START TEST ftl_restore 00:20:54.652 ************************************ 00:20:54.652 12:53:19 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:54.652 * Looking for test storage... 00:20:54.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:54.652 12:53:20 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:54.652 12:53:20 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:20:54.652 12:53:20 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:54.652 12:53:20 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:54.652 12:53:20 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:54.652 12:53:20 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:54.652 12:53:20 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:54.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.652 --rc genhtml_branch_coverage=1 00:20:54.652 --rc genhtml_function_coverage=1 00:20:54.652 --rc genhtml_legend=1 00:20:54.652 --rc geninfo_all_blocks=1 00:20:54.652 --rc geninfo_unexecuted_blocks=1 00:20:54.652 00:20:54.652 ' 00:20:54.652 12:53:20 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:54.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.652 --rc genhtml_branch_coverage=1 00:20:54.652 --rc genhtml_function_coverage=1 00:20:54.652 --rc genhtml_legend=1 00:20:54.653 --rc geninfo_all_blocks=1 00:20:54.653 --rc geninfo_unexecuted_blocks=1 00:20:54.653 00:20:54.653 ' 00:20:54.653 12:53:20 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:54.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.653 --rc genhtml_branch_coverage=1 00:20:54.653 --rc genhtml_function_coverage=1 00:20:54.653 --rc genhtml_legend=1 00:20:54.653 --rc geninfo_all_blocks=1 00:20:54.653 --rc geninfo_unexecuted_blocks=1 00:20:54.653 00:20:54.653 ' 00:20:54.653 12:53:20 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:54.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.653 --rc genhtml_branch_coverage=1 00:20:54.653 --rc genhtml_function_coverage=1 00:20:54.653 --rc genhtml_legend=1 00:20:54.653 --rc geninfo_all_blocks=1 00:20:54.653 --rc geninfo_unexecuted_blocks=1 00:20:54.653 00:20:54.653 ' 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
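[annotation] The xtrace above walks through scripts/common.sh deciding whether the installed lcov (1.15) predates version 2: the "lt 1.15 2" call hands off to cmp_versions, which splits each version string into an array on '.', '-' and ':' (the IFS=.-: reads visible above), then compares the arrays component by component. A minimal standalone sketch of that comparison technique follows; the function name version_lt is illustrative only, and the sketch assumes purely numeric components, whereas the traced helper also normalizes each field through its decimal step.

    version_lt() {
        # Split on '.', '-' and ':' exactly as the traced helper does.
        local -a ver1 ver2
        local v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # Missing components default to 0, so "1.15" vs "2" compares (1,15) to (2,0).
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo 'lcov 1.15 predates 2'   # same outcome as the traced lt 1.15 2

[end annotation]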
00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.EoTHz2pRrV 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:54.653 
12:53:20 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77260 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77260 00:20:54.653 12:53:20 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77260 ']' 00:20:54.653 12:53:20 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.653 12:53:20 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:54.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.653 12:53:20 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.653 12:53:20 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:54.653 12:53:20 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:54.653 12:53:20 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:54.914 [2024-11-20 12:53:20.220952] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:20:54.914 [2024-11-20 12:53:20.221093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77260 ] 00:20:54.914 [2024-11-20 12:53:20.386801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.176 [2024-11-20 12:53:20.507523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.745 12:53:21 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.745 12:53:21 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:55.745 12:53:21 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:55.745 12:53:21 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:55.745 12:53:21 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:55.745 12:53:21 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:55.745 12:53:21 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:55.745 12:53:21 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:56.003 12:53:21 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:56.003 12:53:21 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:56.003 12:53:21 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:56.003 12:53:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:56.003 12:53:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:56.003 12:53:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:56.003 12:53:21 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:56.003 12:53:21 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:56.264 12:53:21 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:56.264 { 00:20:56.264 "name": "nvme0n1", 00:20:56.264 "aliases": [ 00:20:56.264 "631501cb-39e3-4bd4-9976-0765ad3b4210" 00:20:56.264 ], 00:20:56.264 "product_name": "NVMe disk", 00:20:56.264 "block_size": 4096, 00:20:56.264 "num_blocks": 1310720, 00:20:56.264 "uuid": 
"631501cb-39e3-4bd4-9976-0765ad3b4210", 00:20:56.264 "numa_id": -1, 00:20:56.264 "assigned_rate_limits": { 00:20:56.264 "rw_ios_per_sec": 0, 00:20:56.264 "rw_mbytes_per_sec": 0, 00:20:56.264 "r_mbytes_per_sec": 0, 00:20:56.264 "w_mbytes_per_sec": 0 00:20:56.264 }, 00:20:56.264 "claimed": true, 00:20:56.264 "claim_type": "read_many_write_one", 00:20:56.264 "zoned": false, 00:20:56.264 "supported_io_types": { 00:20:56.264 "read": true, 00:20:56.264 "write": true, 00:20:56.264 "unmap": true, 00:20:56.264 "flush": true, 00:20:56.264 "reset": true, 00:20:56.264 "nvme_admin": true, 00:20:56.264 "nvme_io": true, 00:20:56.264 "nvme_io_md": false, 00:20:56.264 "write_zeroes": true, 00:20:56.264 "zcopy": false, 00:20:56.264 "get_zone_info": false, 00:20:56.264 "zone_management": false, 00:20:56.264 "zone_append": false, 00:20:56.264 "compare": true, 00:20:56.264 "compare_and_write": false, 00:20:56.264 "abort": true, 00:20:56.264 "seek_hole": false, 00:20:56.264 "seek_data": false, 00:20:56.264 "copy": true, 00:20:56.264 "nvme_iov_md": false 00:20:56.264 }, 00:20:56.264 "driver_specific": { 00:20:56.264 "nvme": [ 00:20:56.264 { 00:20:56.264 "pci_address": "0000:00:11.0", 00:20:56.264 "trid": { 00:20:56.264 "trtype": "PCIe", 00:20:56.264 "traddr": "0000:00:11.0" 00:20:56.264 }, 00:20:56.264 "ctrlr_data": { 00:20:56.264 "cntlid": 0, 00:20:56.264 "vendor_id": "0x1b36", 00:20:56.264 "model_number": "QEMU NVMe Ctrl", 00:20:56.264 "serial_number": "12341", 00:20:56.264 "firmware_revision": "8.0.0", 00:20:56.264 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:56.264 "oacs": { 00:20:56.264 "security": 0, 00:20:56.264 "format": 1, 00:20:56.264 "firmware": 0, 00:20:56.264 "ns_manage": 1 00:20:56.264 }, 00:20:56.264 "multi_ctrlr": false, 00:20:56.264 "ana_reporting": false 00:20:56.264 }, 00:20:56.264 "vs": { 00:20:56.264 "nvme_version": "1.4" 00:20:56.264 }, 00:20:56.264 "ns_data": { 00:20:56.264 "id": 1, 00:20:56.264 "can_share": false 00:20:56.264 } 00:20:56.264 } 00:20:56.264 ], 00:20:56.264 "mp_policy": "active_passive" 00:20:56.264 } 00:20:56.264 } 00:20:56.264 ]' 00:20:56.264 12:53:21 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:56.264 12:53:21 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:56.264 12:53:21 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:56.264 12:53:21 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:56.264 12:53:21 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:56.264 12:53:21 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:56.264 12:53:21 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:56.264 12:53:21 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:56.264 12:53:21 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:56.264 12:53:21 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:56.264 12:53:21 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:56.525 12:53:21 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=7d01b25a-0873-4026-91a8-9e3080038c05 00:20:56.525 12:53:21 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:56.525 12:53:21 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7d01b25a-0873-4026-91a8-9e3080038c05 00:20:56.785 12:53:22 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:20:57.047 12:53:22 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=8de3f431-add8-43da-957f-ad5c710e8b62 00:20:57.047 12:53:22 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8de3f431-add8-43da-957f-ad5c710e8b62 00:20:57.047 12:53:22 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=b0d1e2ed-925a-4230-922d-e834c715757c 00:20:57.047 12:53:22 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:57.047 12:53:22 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b0d1e2ed-925a-4230-922d-e834c715757c 00:20:57.047 12:53:22 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:57.047 12:53:22 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:57.047 12:53:22 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=b0d1e2ed-925a-4230-922d-e834c715757c 00:20:57.047 12:53:22 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:57.047 12:53:22 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size b0d1e2ed-925a-4230-922d-e834c715757c 00:20:57.047 12:53:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=b0d1e2ed-925a-4230-922d-e834c715757c 00:20:57.047 12:53:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:57.047 12:53:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:57.047 12:53:22 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:57.047 12:53:22 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b0d1e2ed-925a-4230-922d-e834c715757c 00:20:57.307 12:53:22 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:57.307 { 00:20:57.307 "name": "b0d1e2ed-925a-4230-922d-e834c715757c", 00:20:57.307 "aliases": [ 00:20:57.307 "lvs/nvme0n1p0" 00:20:57.307 ], 00:20:57.307 "product_name": "Logical Volume", 00:20:57.307 "block_size": 4096, 00:20:57.307 "num_blocks": 26476544, 00:20:57.307 "uuid": "b0d1e2ed-925a-4230-922d-e834c715757c", 00:20:57.307 "assigned_rate_limits": { 00:20:57.307 "rw_ios_per_sec": 0, 00:20:57.307 "rw_mbytes_per_sec": 0, 00:20:57.307 "r_mbytes_per_sec": 0, 00:20:57.307 "w_mbytes_per_sec": 0 00:20:57.307 }, 00:20:57.307 "claimed": false, 00:20:57.307 "zoned": false, 00:20:57.307 "supported_io_types": { 00:20:57.307 "read": true, 00:20:57.307 "write": true, 00:20:57.307 "unmap": true, 00:20:57.307 "flush": false, 00:20:57.307 "reset": true, 00:20:57.307 "nvme_admin": false, 00:20:57.307 "nvme_io": false, 00:20:57.308 "nvme_io_md": false, 00:20:57.308 "write_zeroes": true, 00:20:57.308 "zcopy": false, 00:20:57.308 "get_zone_info": false, 00:20:57.308 "zone_management": false, 00:20:57.308 "zone_append": false, 00:20:57.308 "compare": false, 00:20:57.308 "compare_and_write": false, 00:20:57.308 "abort": false, 00:20:57.308 "seek_hole": true, 00:20:57.308 "seek_data": true, 00:20:57.308 "copy": false, 00:20:57.308 "nvme_iov_md": false 00:20:57.308 }, 00:20:57.308 "driver_specific": { 00:20:57.308 "lvol": { 00:20:57.308 "lvol_store_uuid": "8de3f431-add8-43da-957f-ad5c710e8b62", 00:20:57.308 "base_bdev": "nvme0n1", 00:20:57.308 "thin_provision": true, 00:20:57.308 "num_allocated_clusters": 0, 00:20:57.308 "snapshot": false, 00:20:57.308 "clone": false, 00:20:57.308 "esnap_clone": false 00:20:57.308 } 00:20:57.308 } 00:20:57.308 } 00:20:57.308 ]' 00:20:57.308 12:53:22 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:57.308 12:53:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:57.308 12:53:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:57.308 12:53:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:57.308 12:53:22 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:57.308 12:53:22 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:57.308 12:53:22 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:57.308 12:53:22 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:57.308 12:53:22 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:57.875 12:53:23 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:57.875 12:53:23 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:57.875 12:53:23 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size b0d1e2ed-925a-4230-922d-e834c715757c 00:20:57.875 12:53:23 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=b0d1e2ed-925a-4230-922d-e834c715757c 00:20:57.875 12:53:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:57.875 12:53:23 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:57.875 12:53:23 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:57.875 12:53:23 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b0d1e2ed-925a-4230-922d-e834c715757c 00:20:57.875 12:53:23 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:57.875 { 00:20:57.875 "name": "b0d1e2ed-925a-4230-922d-e834c715757c", 00:20:57.875 "aliases": [ 00:20:57.875 "lvs/nvme0n1p0" 00:20:57.875 ], 00:20:57.875 "product_name": "Logical Volume", 00:20:57.875 "block_size": 4096, 00:20:57.875 "num_blocks": 26476544, 00:20:57.875 "uuid": "b0d1e2ed-925a-4230-922d-e834c715757c", 00:20:57.875 "assigned_rate_limits": { 00:20:57.875 "rw_ios_per_sec": 0, 00:20:57.875 "rw_mbytes_per_sec": 0, 00:20:57.875 "r_mbytes_per_sec": 0, 00:20:57.875 "w_mbytes_per_sec": 0 00:20:57.875 }, 00:20:57.875 "claimed": false, 00:20:57.875 "zoned": false, 00:20:57.875 "supported_io_types": { 00:20:57.875 "read": true, 00:20:57.875 "write": true, 00:20:57.875 "unmap": true, 00:20:57.875 "flush": false, 00:20:57.875 "reset": true, 00:20:57.875 "nvme_admin": false, 00:20:57.875 "nvme_io": false, 00:20:57.875 "nvme_io_md": false, 00:20:57.875 "write_zeroes": true, 00:20:57.875 "zcopy": false, 00:20:57.875 "get_zone_info": false, 00:20:57.875 "zone_management": false, 00:20:57.875 "zone_append": false, 00:20:57.875 "compare": false, 00:20:57.875 "compare_and_write": false, 00:20:57.875 "abort": false, 00:20:57.875 "seek_hole": true, 00:20:57.875 "seek_data": true, 00:20:57.875 "copy": false, 00:20:57.875 "nvme_iov_md": false 00:20:57.875 }, 00:20:57.875 "driver_specific": { 00:20:57.875 "lvol": { 00:20:57.875 "lvol_store_uuid": "8de3f431-add8-43da-957f-ad5c710e8b62", 00:20:57.875 "base_bdev": "nvme0n1", 00:20:57.875 "thin_provision": true, 00:20:57.875 "num_allocated_clusters": 0, 00:20:57.875 "snapshot": false, 00:20:57.875 "clone": false, 00:20:57.875 "esnap_clone": false 00:20:57.875 } 00:20:57.875 } 00:20:57.875 } 00:20:57.875 ]' 00:20:57.875 12:53:23 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
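The get_bdev_size steps traced above and below reduce to block_size * num_blocks from bdev_get_bdevs, converted to MiB. A minimal bash sketch of that arithmetic, assuming the same rpc.py path used throughout this run (the helper name here is illustrative, not the exact autotest_common.sh source):

    get_bdev_size_mb() {
        local bdev_name=$1
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        local bs nb
        # query the bdev and pull the two size fields, as the traced jq calls do
        bs=$("$rpc" bdev_get_bdevs -b "$bdev_name" | jq '.[] .block_size')
        nb=$("$rpc" bdev_get_bdevs -b "$bdev_name" | jq '.[] .num_blocks')
        # nvme0n1: 4096 B * 1310720 blocks = 5120 MiB; the lvol: 4096 B * 26476544 blocks = 103424 MiB
        echo $(( bs * nb / 1024 / 1024 ))
    }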
00:20:57.875 12:53:23 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:57.875 12:53:23 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:57.875 12:53:23 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:57.875 12:53:23 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:57.875 12:53:23 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:57.875 12:53:23 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:57.875 12:53:23 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:58.134 12:53:23 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:58.134 12:53:23 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size b0d1e2ed-925a-4230-922d-e834c715757c 00:20:58.134 12:53:23 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=b0d1e2ed-925a-4230-922d-e834c715757c 00:20:58.134 12:53:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:58.134 12:53:23 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:58.134 12:53:23 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:58.134 12:53:23 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b0d1e2ed-925a-4230-922d-e834c715757c 00:20:58.394 12:53:23 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:58.394 { 00:20:58.395 "name": "b0d1e2ed-925a-4230-922d-e834c715757c", 00:20:58.395 "aliases": [ 00:20:58.395 "lvs/nvme0n1p0" 00:20:58.395 ], 00:20:58.395 "product_name": "Logical Volume", 00:20:58.395 "block_size": 4096, 00:20:58.395 "num_blocks": 26476544, 00:20:58.395 "uuid": "b0d1e2ed-925a-4230-922d-e834c715757c", 00:20:58.395 "assigned_rate_limits": { 00:20:58.395 "rw_ios_per_sec": 0, 00:20:58.395 "rw_mbytes_per_sec": 0, 00:20:58.395 "r_mbytes_per_sec": 0, 00:20:58.395 "w_mbytes_per_sec": 0 00:20:58.395 }, 00:20:58.395 "claimed": false, 00:20:58.395 "zoned": false, 00:20:58.395 "supported_io_types": { 00:20:58.395 "read": true, 00:20:58.395 "write": true, 00:20:58.395 "unmap": true, 00:20:58.395 "flush": false, 00:20:58.395 "reset": true, 00:20:58.395 "nvme_admin": false, 00:20:58.395 "nvme_io": false, 00:20:58.395 "nvme_io_md": false, 00:20:58.395 "write_zeroes": true, 00:20:58.395 "zcopy": false, 00:20:58.395 "get_zone_info": false, 00:20:58.395 "zone_management": false, 00:20:58.395 "zone_append": false, 00:20:58.395 "compare": false, 00:20:58.395 "compare_and_write": false, 00:20:58.395 "abort": false, 00:20:58.395 "seek_hole": true, 00:20:58.395 "seek_data": true, 00:20:58.395 "copy": false, 00:20:58.395 "nvme_iov_md": false 00:20:58.395 }, 00:20:58.395 "driver_specific": { 00:20:58.395 "lvol": { 00:20:58.395 "lvol_store_uuid": "8de3f431-add8-43da-957f-ad5c710e8b62", 00:20:58.395 "base_bdev": "nvme0n1", 00:20:58.395 "thin_provision": true, 00:20:58.395 "num_allocated_clusters": 0, 00:20:58.395 "snapshot": false, 00:20:58.395 "clone": false, 00:20:58.395 "esnap_clone": false 00:20:58.395 } 00:20:58.395 } 00:20:58.395 } 00:20:58.395 ]' 00:20:58.395 12:53:23 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:58.395 12:53:23 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:58.395 12:53:23 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:58.395 12:53:23 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:20:58.395 12:53:23 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:58.395 12:53:23 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:58.395 12:53:23 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:58.395 12:53:23 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b0d1e2ed-925a-4230-922d-e834c715757c --l2p_dram_limit 10' 00:20:58.395 12:53:23 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:58.395 12:53:23 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:58.395 12:53:23 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:58.395 12:53:23 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:58.395 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:58.395 12:53:23 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b0d1e2ed-925a-4230-922d-e834c715757c --l2p_dram_limit 10 -c nvc0n1p0 00:20:58.657 [2024-11-20 12:53:24.022011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.657 [2024-11-20 12:53:24.022144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:58.657 [2024-11-20 12:53:24.022166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:58.657 [2024-11-20 12:53:24.022173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.657 [2024-11-20 12:53:24.022228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.657 [2024-11-20 12:53:24.022236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:58.657 [2024-11-20 12:53:24.022243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:58.657 [2024-11-20 12:53:24.022249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.657 [2024-11-20 12:53:24.022269] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:58.657 [2024-11-20 12:53:24.022903] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:58.657 [2024-11-20 12:53:24.022924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.657 [2024-11-20 12:53:24.022931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:58.657 [2024-11-20 12:53:24.022939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:20:58.657 [2024-11-20 12:53:24.022945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.657 [2024-11-20 12:53:24.022996] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 935d31be-3433-439f-a336-7f65533c8f51 00:20:58.657 [2024-11-20 12:53:24.023943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.657 [2024-11-20 12:53:24.023972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:58.657 [2024-11-20 12:53:24.023980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:58.657 [2024-11-20 12:53:24.023988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.657 [2024-11-20 12:53:24.028521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.657 [2024-11-20 
12:53:24.028549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:58.657 [2024-11-20 12:53:24.028559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.500 ms 00:20:58.657 [2024-11-20 12:53:24.028566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.657 [2024-11-20 12:53:24.028631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.657 [2024-11-20 12:53:24.028640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:58.657 [2024-11-20 12:53:24.028646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:58.657 [2024-11-20 12:53:24.028655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.657 [2024-11-20 12:53:24.028693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.657 [2024-11-20 12:53:24.028701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:58.657 [2024-11-20 12:53:24.028708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:58.657 [2024-11-20 12:53:24.028716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.657 [2024-11-20 12:53:24.028732] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:58.657 [2024-11-20 12:53:24.031555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.657 [2024-11-20 12:53:24.031580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:58.657 [2024-11-20 12:53:24.031590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.826 ms 00:20:58.657 [2024-11-20 12:53:24.031596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.657 [2024-11-20 12:53:24.031630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.657 [2024-11-20 12:53:24.031636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:58.657 [2024-11-20 12:53:24.031644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:58.657 [2024-11-20 12:53:24.031649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.657 [2024-11-20 12:53:24.031663] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:58.657 [2024-11-20 12:53:24.031776] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:58.657 [2024-11-20 12:53:24.031788] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:58.657 [2024-11-20 12:53:24.031797] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:58.657 [2024-11-20 12:53:24.031806] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:58.657 [2024-11-20 12:53:24.031813] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:58.657 [2024-11-20 12:53:24.031820] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:58.657 [2024-11-20 12:53:24.031826] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:58.657 [2024-11-20 12:53:24.031834] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:58.657 [2024-11-20 12:53:24.031840] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:58.657 [2024-11-20 12:53:24.031847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.657 [2024-11-20 12:53:24.031852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:58.657 [2024-11-20 12:53:24.031860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:20:58.657 [2024-11-20 12:53:24.031870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.657 [2024-11-20 12:53:24.031936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.657 [2024-11-20 12:53:24.031942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:58.657 [2024-11-20 12:53:24.031949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:58.657 [2024-11-20 12:53:24.031955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.657 [2024-11-20 12:53:24.032032] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:58.657 [2024-11-20 12:53:24.032039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:58.657 [2024-11-20 12:53:24.032047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:58.657 [2024-11-20 12:53:24.032053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.657 [2024-11-20 12:53:24.032060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:58.657 [2024-11-20 12:53:24.032065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:58.657 [2024-11-20 12:53:24.032071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:58.657 [2024-11-20 12:53:24.032076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:58.657 [2024-11-20 12:53:24.032083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:58.657 [2024-11-20 12:53:24.032088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:58.657 [2024-11-20 12:53:24.032094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:58.657 [2024-11-20 12:53:24.032099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:58.657 [2024-11-20 12:53:24.032106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:58.657 [2024-11-20 12:53:24.032111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:58.657 [2024-11-20 12:53:24.032118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:58.657 [2024-11-20 12:53:24.032123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.657 [2024-11-20 12:53:24.032130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:58.657 [2024-11-20 12:53:24.032135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:58.657 [2024-11-20 12:53:24.032142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.657 [2024-11-20 12:53:24.032147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:58.657 [2024-11-20 12:53:24.032154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:58.657 [2024-11-20 12:53:24.032159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.658 [2024-11-20 12:53:24.032165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:58.658 
[2024-11-20 12:53:24.032171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:58.658 [2024-11-20 12:53:24.032177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.658 [2024-11-20 12:53:24.032182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:58.658 [2024-11-20 12:53:24.032188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:58.658 [2024-11-20 12:53:24.032193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.658 [2024-11-20 12:53:24.032200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:58.658 [2024-11-20 12:53:24.032205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:58.658 [2024-11-20 12:53:24.032211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.658 [2024-11-20 12:53:24.032216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:58.658 [2024-11-20 12:53:24.032225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:58.658 [2024-11-20 12:53:24.032229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:58.658 [2024-11-20 12:53:24.032236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:58.658 [2024-11-20 12:53:24.032240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:58.658 [2024-11-20 12:53:24.032246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:58.658 [2024-11-20 12:53:24.032251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:58.658 [2024-11-20 12:53:24.032258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:58.658 [2024-11-20 12:53:24.032263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.658 [2024-11-20 12:53:24.032269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:58.658 [2024-11-20 12:53:24.032274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:58.658 [2024-11-20 12:53:24.032280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.658 [2024-11-20 12:53:24.032284] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:58.658 [2024-11-20 12:53:24.032291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:58.658 [2024-11-20 12:53:24.032296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:58.658 [2024-11-20 12:53:24.032304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.658 [2024-11-20 12:53:24.032310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:58.658 [2024-11-20 12:53:24.032318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:58.658 [2024-11-20 12:53:24.032323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:58.658 [2024-11-20 12:53:24.032329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:58.658 [2024-11-20 12:53:24.032334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:58.658 [2024-11-20 12:53:24.032340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:58.658 [2024-11-20 12:53:24.032348] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:58.658 [2024-11-20 
12:53:24.032356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:58.658 [2024-11-20 12:53:24.032366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:58.658 [2024-11-20 12:53:24.032373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:58.658 [2024-11-20 12:53:24.032379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:58.658 [2024-11-20 12:53:24.032385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:58.658 [2024-11-20 12:53:24.032391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:58.658 [2024-11-20 12:53:24.032398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:58.658 [2024-11-20 12:53:24.032403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:58.658 [2024-11-20 12:53:24.032409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:58.658 [2024-11-20 12:53:24.032415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:58.658 [2024-11-20 12:53:24.032422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:58.658 [2024-11-20 12:53:24.032428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:58.658 [2024-11-20 12:53:24.032434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:58.658 [2024-11-20 12:53:24.032440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:58.658 [2024-11-20 12:53:24.032447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:58.658 [2024-11-20 12:53:24.032453] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:58.658 [2024-11-20 12:53:24.032460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:58.658 [2024-11-20 12:53:24.032466] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:58.658 [2024-11-20 12:53:24.032473] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:58.658 [2024-11-20 12:53:24.032478] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:58.658 [2024-11-20 12:53:24.032485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:58.658 [2024-11-20 12:53:24.032491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.658 [2024-11-20 12:53:24.032498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:58.658 [2024-11-20 12:53:24.032504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:20:58.658 [2024-11-20 12:53:24.032510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.658 [2024-11-20 12:53:24.032538] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:58.658 [2024-11-20 12:53:24.032548] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:02.869 [2024-11-20 12:53:27.661917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.869 [2024-11-20 12:53:27.662222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:02.869 [2024-11-20 12:53:27.662380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3629.355 ms 00:21:02.869 [2024-11-20 12:53:27.662407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.869 [2024-11-20 12:53:27.686968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.869 [2024-11-20 12:53:27.687146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:02.869 [2024-11-20 12:53:27.687204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.343 ms 00:21:02.869 [2024-11-20 12:53:27.687226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.869 [2024-11-20 12:53:27.687349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.869 [2024-11-20 12:53:27.687447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:02.869 [2024-11-20 12:53:27.687467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:21:02.869 [2024-11-20 12:53:27.687487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.869 [2024-11-20 12:53:27.713678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.869 [2024-11-20 12:53:27.713835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:02.869 [2024-11-20 12:53:27.713895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.142 ms 00:21:02.869 [2024-11-20 12:53:27.713906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.869 [2024-11-20 12:53:27.713932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.869 [2024-11-20 12:53:27.713946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:02.869 [2024-11-20 12:53:27.713953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:02.869 [2024-11-20 12:53:27.713962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.869 [2024-11-20 12:53:27.714363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.869 [2024-11-20 12:53:27.714381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:02.869 [2024-11-20 12:53:27.714389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:21:02.869 [2024-11-20 12:53:27.714397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.869 
[2024-11-20 12:53:27.714478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.869 [2024-11-20 12:53:27.714487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:02.869 [2024-11-20 12:53:27.714496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:02.869 [2024-11-20 12:53:27.714505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.869 [2024-11-20 12:53:27.726899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.869 [2024-11-20 12:53:27.727008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:02.869 [2024-11-20 12:53:27.727052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.380 ms 00:21:02.869 [2024-11-20 12:53:27.727071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.869 [2024-11-20 12:53:27.736334] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:02.869 [2024-11-20 12:53:27.738817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.869 [2024-11-20 12:53:27.738845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:02.869 [2024-11-20 12:53:27.738856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.675 ms 00:21:02.869 [2024-11-20 12:53:27.738861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.869 [2024-11-20 12:53:27.827513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.869 [2024-11-20 12:53:27.827558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:02.869 [2024-11-20 12:53:27.827575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.626 ms 00:21:02.869 [2024-11-20 12:53:27.827583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.869 [2024-11-20 12:53:27.827803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.869 [2024-11-20 12:53:27.827818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:02.869 [2024-11-20 12:53:27.827831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:21:02.869 [2024-11-20 12:53:27.827839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.869 [2024-11-20 12:53:27.851487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.869 [2024-11-20 12:53:27.851524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:02.869 [2024-11-20 12:53:27.851538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.600 ms 00:21:02.869 [2024-11-20 12:53:27.851546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.869 [2024-11-20 12:53:27.874759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.869 [2024-11-20 12:53:27.874795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:02.869 [2024-11-20 12:53:27.874809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.168 ms 00:21:02.869 [2024-11-20 12:53:27.874816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.869 [2024-11-20 12:53:27.875380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.869 [2024-11-20 12:53:27.875401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:02.869 
[2024-11-20 12:53:27.875412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:21:02.869 [2024-11-20 12:53:27.875419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.869 [2024-11-20 12:53:27.951581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.869 [2024-11-20 12:53:27.951628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:02.869 [2024-11-20 12:53:27.951646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.122 ms 00:21:02.869 [2024-11-20 12:53:27.951655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.869 [2024-11-20 12:53:27.977041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.869 [2024-11-20 12:53:27.977189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:02.869 [2024-11-20 12:53:27.977211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.305 ms 00:21:02.869 [2024-11-20 12:53:27.977219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.869 [2024-11-20 12:53:28.001381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.869 [2024-11-20 12:53:28.001419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:02.869 [2024-11-20 12:53:28.001432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.122 ms 00:21:02.869 [2024-11-20 12:53:28.001440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.869 [2024-11-20 12:53:28.027068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.869 [2024-11-20 12:53:28.027107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:02.870 [2024-11-20 12:53:28.027120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.585 ms 00:21:02.870 [2024-11-20 12:53:28.027128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.870 [2024-11-20 12:53:28.027175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.870 [2024-11-20 12:53:28.027185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:02.870 [2024-11-20 12:53:28.027198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:02.870 [2024-11-20 12:53:28.027205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.870 [2024-11-20 12:53:28.027287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.870 [2024-11-20 12:53:28.027297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:02.870 [2024-11-20 12:53:28.027310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:02.870 [2024-11-20 12:53:28.027317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.870 [2024-11-20 12:53:28.028804] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4006.385 ms, result 0 00:21:02.870 { 00:21:02.870 "name": "ftl0", 00:21:02.870 "uuid": "935d31be-3433-439f-a336-7f65533c8f51" 00:21:02.870 } 00:21:02.870 12:53:28 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:02.870 12:53:28 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:02.870 12:53:28 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:02.870 12:53:28 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:03.132 [2024-11-20 12:53:28.411776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.132 [2024-11-20 12:53:28.411827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:03.132 [2024-11-20 12:53:28.411840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:03.132 [2024-11-20 12:53:28.411856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.132 [2024-11-20 12:53:28.411879] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:03.132 [2024-11-20 12:53:28.414503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.132 [2024-11-20 12:53:28.414532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:03.132 [2024-11-20 12:53:28.414544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.606 ms 00:21:03.132 [2024-11-20 12:53:28.414552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.132 [2024-11-20 12:53:28.414825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.132 [2024-11-20 12:53:28.414840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:03.132 [2024-11-20 12:53:28.414852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:21:03.132 [2024-11-20 12:53:28.414859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.132 [2024-11-20 12:53:28.418101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.132 [2024-11-20 12:53:28.418122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:03.132 [2024-11-20 12:53:28.418133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.225 ms 00:21:03.132 [2024-11-20 12:53:28.418141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.132 [2024-11-20 12:53:28.424313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.133 [2024-11-20 12:53:28.424337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:03.133 [2024-11-20 12:53:28.424351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.153 ms 00:21:03.133 [2024-11-20 12:53:28.424358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.133 [2024-11-20 12:53:28.448652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.133 [2024-11-20 12:53:28.448686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:03.133 [2024-11-20 12:53:28.448699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.214 ms 00:21:03.133 [2024-11-20 12:53:28.448706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.133 [2024-11-20 12:53:28.464149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.133 [2024-11-20 12:53:28.464185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:03.133 [2024-11-20 12:53:28.464198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.388 ms 00:21:03.133 [2024-11-20 12:53:28.464206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.133 [2024-11-20 12:53:28.464356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.133 [2024-11-20 12:53:28.464366] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:21:03.133 [2024-11-20 12:53:28.464377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms
00:21:03.133 [2024-11-20 12:53:28.464384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:03.133 [2024-11-20 12:53:28.487867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:03.133 [2024-11-20 12:53:28.487899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:21:03.133 [2024-11-20 12:53:28.487911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.463 ms
00:21:03.133 [2024-11-20 12:53:28.487918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:03.133 [2024-11-20 12:53:28.511364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:03.133 [2024-11-20 12:53:28.511488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:21:03.133 [2024-11-20 12:53:28.511508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.410 ms
00:21:03.133 [2024-11-20 12:53:28.511515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:03.133 [2024-11-20 12:53:28.534413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:03.133 [2024-11-20 12:53:28.534528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:21:03.133 [2024-11-20 12:53:28.534546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.862 ms
00:21:03.133 [2024-11-20 12:53:28.534553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:03.133 [2024-11-20 12:53:28.557385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:03.133 [2024-11-20 12:53:28.557504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:21:03.133 [2024-11-20 12:53:28.557522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.760 ms
00:21:03.133 [2024-11-20 12:53:28.557530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:03.133 [2024-11-20 12:53:28.557562] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:21:03.133 [2024-11-20 12:53:28.557574 .. 12:53:28.558432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (identical for all 100 bands)
00:21:03.134 [2024-11-20 12:53:28.558447] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:21:03.134 [2024-11-20 12:53:28.558458] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 935d31be-3433-439f-a336-7f65533c8f51
00:21:03.134 [2024-11-20 12:53:28.558466] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:21:03.134 [2024-11-20 12:53:28.558476] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:21:03.134 [2024-11-20 12:53:28.558483] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:21:03.134 [2024-11-20 12:53:28.558494] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:21:03.134 [2024-11-20 12:53:28.558501] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:21:03.134 [2024-11-20 12:53:28.558510] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:21:03.134 [2024-11-20 12:53:28.558518] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:21:03.134 [2024-11-20 12:53:28.558526] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:21:03.134 [2024-11-20 12:53:28.558532] ftl_debug.c: 220:ftl_dev_dump_stats:
*NOTICE*: [FTL][ftl0] start: 0 00:21:03.134 [2024-11-20 12:53:28.558540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.134 [2024-11-20 12:53:28.558548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:03.134 [2024-11-20 12:53:28.558557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:21:03.134 [2024-11-20 12:53:28.558564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.134 [2024-11-20 12:53:28.570909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.134 [2024-11-20 12:53:28.570939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:03.134 [2024-11-20 12:53:28.570951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.309 ms 00:21:03.134 [2024-11-20 12:53:28.570959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.134 [2024-11-20 12:53:28.571320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.134 [2024-11-20 12:53:28.571338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:03.134 [2024-11-20 12:53:28.571348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:21:03.134 [2024-11-20 12:53:28.571357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.134 [2024-11-20 12:53:28.613981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.134 [2024-11-20 12:53:28.614018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:03.134 [2024-11-20 12:53:28.614031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.134 [2024-11-20 12:53:28.614039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.134 [2024-11-20 12:53:28.614098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.134 [2024-11-20 12:53:28.614105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:03.134 [2024-11-20 12:53:28.614115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.134 [2024-11-20 12:53:28.614124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.134 [2024-11-20 12:53:28.614199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.134 [2024-11-20 12:53:28.614209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:03.134 [2024-11-20 12:53:28.614219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.134 [2024-11-20 12:53:28.614226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.134 [2024-11-20 12:53:28.614246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.134 [2024-11-20 12:53:28.614254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:03.134 [2024-11-20 12:53:28.614264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.134 [2024-11-20 12:53:28.614271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.396 [2024-11-20 12:53:28.692181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.396 [2024-11-20 12:53:28.692226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:03.396 [2024-11-20 12:53:28.692239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:03.396 [2024-11-20 12:53:28.692247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.396 [2024-11-20 12:53:28.756548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.396 [2024-11-20 12:53:28.756593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:03.396 [2024-11-20 12:53:28.756606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.396 [2024-11-20 12:53:28.756617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.396 [2024-11-20 12:53:28.756709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.396 [2024-11-20 12:53:28.756718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:03.396 [2024-11-20 12:53:28.756729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.396 [2024-11-20 12:53:28.756761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.396 [2024-11-20 12:53:28.756815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.396 [2024-11-20 12:53:28.756825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:03.396 [2024-11-20 12:53:28.756835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.396 [2024-11-20 12:53:28.756843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.396 [2024-11-20 12:53:28.756936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.396 [2024-11-20 12:53:28.756946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:03.396 [2024-11-20 12:53:28.756955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.396 [2024-11-20 12:53:28.756963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.396 [2024-11-20 12:53:28.756996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.396 [2024-11-20 12:53:28.757006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:03.396 [2024-11-20 12:53:28.757015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.396 [2024-11-20 12:53:28.757023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.396 [2024-11-20 12:53:28.757061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.396 [2024-11-20 12:53:28.757073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:03.396 [2024-11-20 12:53:28.757083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.396 [2024-11-20 12:53:28.757090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.396 [2024-11-20 12:53:28.757134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.396 [2024-11-20 12:53:28.757144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:03.396 [2024-11-20 12:53:28.757155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.396 [2024-11-20 12:53:28.757162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.396 [2024-11-20 12:53:28.757294] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 345.492 ms, result 0 00:21:03.396 true 00:21:03.396 12:53:28 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77260 
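(Editor's note: the xtrace lines that follow expand the killprocess helper from autotest_common.sh. Condensed into a standalone sketch — paraphrased from the visible trace, not copied from the canonical source — the guard-then-kill pattern is roughly:

# Condensed sketch of the killprocess pattern traced below (paraphrased
# from the xtrace output; the real helper special-cases sudo wrappers
# instead of bailing out as this sketch does).
killprocess() {
  local pid=$1
  [ -z "$pid" ] && return 1            # no pid given
  kill -0 "$pid" || return 1           # is the process still alive?
  if [ "$(uname)" = Linux ]; then
    # refuse to signal a sudo wrapper; the intended target here is the
    # SPDK worker process (comm reactor_0 in this run)
    [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                          # reap it and propagate its exit code
}
)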
00:21:03.396 12:53:28 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77260 ']' 00:21:03.396 12:53:28 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77260 00:21:03.396 12:53:28 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:21:03.396 12:53:28 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:03.396 12:53:28 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77260 00:21:03.396 12:53:28 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:03.396 12:53:28 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:03.396 12:53:28 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77260' 00:21:03.396 killing process with pid 77260 00:21:03.396 12:53:28 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77260 00:21:03.396 12:53:28 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77260 00:21:05.950 12:53:31 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:10.163 262144+0 records in 00:21:10.163 262144+0 records out 00:21:10.163 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.89848 s, 275 MB/s 00:21:10.163 12:53:35 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:11.603 12:53:37 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:11.603 [2024-11-20 12:53:37.094653] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:21:11.603 [2024-11-20 12:53:37.094872] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77490 ] 00:21:11.864 [2024-11-20 12:53:37.242364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.864 [2024-11-20 12:53:37.316406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.125 [2024-11-20 12:53:37.521888] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:12.125 [2024-11-20 12:53:37.521934] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:12.389 [2024-11-20 12:53:37.676404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.389 [2024-11-20 12:53:37.676440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:12.389 [2024-11-20 12:53:37.676454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:12.389 [2024-11-20 12:53:37.676461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.389 [2024-11-20 12:53:37.676496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.389 [2024-11-20 12:53:37.676504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:12.389 [2024-11-20 12:53:37.676512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:12.389 [2024-11-20 12:53:37.676518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.389 [2024-11-20 12:53:37.676531] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:21:12.389 [2024-11-20 12:53:37.677062] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:12.389 [2024-11-20 12:53:37.677075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.389 [2024-11-20 12:53:37.677081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:12.389 [2024-11-20 12:53:37.677087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:21:12.389 [2024-11-20 12:53:37.677093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.389 [2024-11-20 12:53:37.678003] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:12.389 [2024-11-20 12:53:37.687591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.389 [2024-11-20 12:53:37.687813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:12.389 [2024-11-20 12:53:37.687827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.589 ms 00:21:12.389 [2024-11-20 12:53:37.687833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.389 [2024-11-20 12:53:37.687876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.389 [2024-11-20 12:53:37.687884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:12.389 [2024-11-20 12:53:37.687890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:12.389 [2024-11-20 12:53:37.687895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.389 [2024-11-20 12:53:37.692177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.389 [2024-11-20 12:53:37.692203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:12.389 [2024-11-20 12:53:37.692210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.237 ms 00:21:12.389 [2024-11-20 12:53:37.692216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.389 [2024-11-20 12:53:37.692270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.389 [2024-11-20 12:53:37.692278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:12.389 [2024-11-20 12:53:37.692284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:12.389 [2024-11-20 12:53:37.692289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.389 [2024-11-20 12:53:37.692322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.389 [2024-11-20 12:53:37.692329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:12.389 [2024-11-20 12:53:37.692335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:12.389 [2024-11-20 12:53:37.692340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.389 [2024-11-20 12:53:37.692354] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:12.389 [2024-11-20 12:53:37.695111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.389 [2024-11-20 12:53:37.695211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:12.389 [2024-11-20 12:53:37.695223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.760 ms 00:21:12.389 [2024-11-20 12:53:37.695232] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.389 [2024-11-20 12:53:37.695260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.389 [2024-11-20 12:53:37.695267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:12.389 [2024-11-20 12:53:37.695273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:12.389 [2024-11-20 12:53:37.695279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.389 [2024-11-20 12:53:37.695293] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:12.389 [2024-11-20 12:53:37.695306] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:12.389 [2024-11-20 12:53:37.695333] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:12.389 [2024-11-20 12:53:37.695347] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:12.389 [2024-11-20 12:53:37.695427] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:12.389 [2024-11-20 12:53:37.695435] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:12.389 [2024-11-20 12:53:37.695443] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:12.389 [2024-11-20 12:53:37.695450] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:12.389 [2024-11-20 12:53:37.695457] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:12.389 [2024-11-20 12:53:37.695463] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:12.389 [2024-11-20 12:53:37.695469] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:12.389 [2024-11-20 12:53:37.695474] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:12.389 [2024-11-20 12:53:37.695480] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:12.389 [2024-11-20 12:53:37.695487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.389 [2024-11-20 12:53:37.695493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:12.389 [2024-11-20 12:53:37.695499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:21:12.389 [2024-11-20 12:53:37.695505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.389 [2024-11-20 12:53:37.695569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.389 [2024-11-20 12:53:37.695575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:12.389 [2024-11-20 12:53:37.695581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:12.389 [2024-11-20 12:53:37.695586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.389 [2024-11-20 12:53:37.695677] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:12.389 [2024-11-20 12:53:37.695687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:12.389 [2024-11-20 12:53:37.695694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:21:12.389 [2024-11-20 12:53:37.695699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.389 [2024-11-20 12:53:37.695705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:12.389 [2024-11-20 12:53:37.695711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:12.389 [2024-11-20 12:53:37.695716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:12.389 [2024-11-20 12:53:37.695722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:12.389 [2024-11-20 12:53:37.695728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:12.389 [2024-11-20 12:53:37.695734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:12.389 [2024-11-20 12:53:37.695753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:12.389 [2024-11-20 12:53:37.695758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:12.389 [2024-11-20 12:53:37.695764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:12.389 [2024-11-20 12:53:37.695770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:12.389 [2024-11-20 12:53:37.695776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:12.389 [2024-11-20 12:53:37.695785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.389 [2024-11-20 12:53:37.695790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:12.389 [2024-11-20 12:53:37.695796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:12.389 [2024-11-20 12:53:37.695802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.389 [2024-11-20 12:53:37.695807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:12.389 [2024-11-20 12:53:37.695813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:12.389 [2024-11-20 12:53:37.695820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.389 [2024-11-20 12:53:37.695825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:12.389 [2024-11-20 12:53:37.695830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:12.389 [2024-11-20 12:53:37.695835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.389 [2024-11-20 12:53:37.695840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:12.389 [2024-11-20 12:53:37.695845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:12.389 [2024-11-20 12:53:37.695850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.389 [2024-11-20 12:53:37.695855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:12.389 [2024-11-20 12:53:37.695861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:12.389 [2024-11-20 12:53:37.695866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.390 [2024-11-20 12:53:37.695871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:12.390 [2024-11-20 12:53:37.695876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:12.390 [2024-11-20 12:53:37.695881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:12.390 [2024-11-20 12:53:37.695886] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:21:12.390 [2024-11-20 12:53:37.695891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:12.390 [2024-11-20 12:53:37.695896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:12.390 [2024-11-20 12:53:37.695901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:12.390 [2024-11-20 12:53:37.695906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:12.390 [2024-11-20 12:53:37.695911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.390 [2024-11-20 12:53:37.695916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:12.390 [2024-11-20 12:53:37.695921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:12.390 [2024-11-20 12:53:37.695926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.390 [2024-11-20 12:53:37.695931] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:12.390 [2024-11-20 12:53:37.695938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:12.390 [2024-11-20 12:53:37.695944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:12.390 [2024-11-20 12:53:37.695949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.390 [2024-11-20 12:53:37.695955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:12.390 [2024-11-20 12:53:37.695960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:12.390 [2024-11-20 12:53:37.695965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:12.390 [2024-11-20 12:53:37.695970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:12.390 [2024-11-20 12:53:37.695975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:12.390 [2024-11-20 12:53:37.695980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:12.390 [2024-11-20 12:53:37.695986] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:12.390 [2024-11-20 12:53:37.695993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:12.390 [2024-11-20 12:53:37.696000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:12.390 [2024-11-20 12:53:37.696005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:12.390 [2024-11-20 12:53:37.696011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:12.390 [2024-11-20 12:53:37.696016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:12.390 [2024-11-20 12:53:37.696022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:12.390 [2024-11-20 12:53:37.696027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:12.390 [2024-11-20 12:53:37.696032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:12.390 [2024-11-20 12:53:37.696038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:12.390 [2024-11-20 12:53:37.696043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:12.390 [2024-11-20 12:53:37.696048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:12.390 [2024-11-20 12:53:37.696054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:12.390 [2024-11-20 12:53:37.696060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:12.390 [2024-11-20 12:53:37.696065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:12.390 [2024-11-20 12:53:37.696071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:12.390 [2024-11-20 12:53:37.696076] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:12.390 [2024-11-20 12:53:37.696084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:12.390 [2024-11-20 12:53:37.696090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:12.390 [2024-11-20 12:53:37.696096] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:12.390 [2024-11-20 12:53:37.696101] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:12.390 [2024-11-20 12:53:37.696107] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:12.390 [2024-11-20 12:53:37.696113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.390 [2024-11-20 12:53:37.696119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:12.390 [2024-11-20 12:53:37.696125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:21:12.390 [2024-11-20 12:53:37.696131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.390 [2024-11-20 12:53:37.716798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.390 [2024-11-20 12:53:37.716825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:12.390 [2024-11-20 12:53:37.716833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.635 ms 00:21:12.390 [2024-11-20 12:53:37.716839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.390 [2024-11-20 12:53:37.716902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.390 [2024-11-20 12:53:37.716908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:12.390 [2024-11-20 12:53:37.716914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.044 ms 00:21:12.390 [2024-11-20 12:53:37.716920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.390 [2024-11-20 12:53:37.755769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.390 [2024-11-20 12:53:37.755882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:12.390 [2024-11-20 12:53:37.755896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.811 ms 00:21:12.390 [2024-11-20 12:53:37.755903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.390 [2024-11-20 12:53:37.755929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.390 [2024-11-20 12:53:37.755936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:12.390 [2024-11-20 12:53:37.755943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:12.390 [2024-11-20 12:53:37.755951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.390 [2024-11-20 12:53:37.756258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.390 [2024-11-20 12:53:37.756271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:12.390 [2024-11-20 12:53:37.756278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:21:12.390 [2024-11-20 12:53:37.756283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.390 [2024-11-20 12:53:37.756379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.390 [2024-11-20 12:53:37.756386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:12.390 [2024-11-20 12:53:37.756392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:21:12.390 [2024-11-20 12:53:37.756398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.390 [2024-11-20 12:53:37.766793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.390 [2024-11-20 12:53:37.766890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:12.390 [2024-11-20 12:53:37.766901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.377 ms 00:21:12.390 [2024-11-20 12:53:37.766910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.390 [2024-11-20 12:53:37.776659] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:12.390 [2024-11-20 12:53:37.776686] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:12.390 [2024-11-20 12:53:37.776696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.390 [2024-11-20 12:53:37.776702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:12.390 [2024-11-20 12:53:37.776708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.717 ms 00:21:12.390 [2024-11-20 12:53:37.776714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.390 [2024-11-20 12:53:37.795020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.390 [2024-11-20 12:53:37.795047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:12.390 [2024-11-20 12:53:37.795058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.265 ms 00:21:12.390 [2024-11-20 12:53:37.795065] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.390 [2024-11-20 12:53:37.803756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.390 [2024-11-20 12:53:37.803785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:12.390 [2024-11-20 12:53:37.803793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.662 ms 00:21:12.390 [2024-11-20 12:53:37.803798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.390 [2024-11-20 12:53:37.812121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.390 [2024-11-20 12:53:37.812144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:12.390 [2024-11-20 12:53:37.812151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.298 ms 00:21:12.390 [2024-11-20 12:53:37.812156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.390 [2024-11-20 12:53:37.812598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.390 [2024-11-20 12:53:37.812620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:12.390 [2024-11-20 12:53:37.812627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:21:12.390 [2024-11-20 12:53:37.812633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.390 [2024-11-20 12:53:37.856016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.390 [2024-11-20 12:53:37.856051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:12.391 [2024-11-20 12:53:37.856061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.368 ms 00:21:12.391 [2024-11-20 12:53:37.856071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.391 [2024-11-20 12:53:37.863859] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:12.391 [2024-11-20 12:53:37.865746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.391 [2024-11-20 12:53:37.865769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:12.391 [2024-11-20 12:53:37.865778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.629 ms 00:21:12.391 [2024-11-20 12:53:37.865797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.391 [2024-11-20 12:53:37.865856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.391 [2024-11-20 12:53:37.865865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:12.391 [2024-11-20 12:53:37.865873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:12.391 [2024-11-20 12:53:37.865880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.391 [2024-11-20 12:53:37.865931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.391 [2024-11-20 12:53:37.865940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:12.391 [2024-11-20 12:53:37.865946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:12.391 [2024-11-20 12:53:37.865953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.391 [2024-11-20 12:53:37.865968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.391 [2024-11-20 12:53:37.865975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Start core poller 00:21:12.391 [2024-11-20 12:53:37.865981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:12.391 [2024-11-20 12:53:37.865987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.391 [2024-11-20 12:53:37.866010] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:12.391 [2024-11-20 12:53:37.866018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.391 [2024-11-20 12:53:37.866025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:12.391 [2024-11-20 12:53:37.866032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:12.391 [2024-11-20 12:53:37.866037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.391 [2024-11-20 12:53:37.883680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.391 [2024-11-20 12:53:37.883707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:12.391 [2024-11-20 12:53:37.883716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.631 ms 00:21:12.391 [2024-11-20 12:53:37.883722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.391 [2024-11-20 12:53:37.883789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.391 [2024-11-20 12:53:37.883798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:12.391 [2024-11-20 12:53:37.883804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:12.391 [2024-11-20 12:53:37.883810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.391 [2024-11-20 12:53:37.884819] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 208.076 ms, result 0 00:21:13.787  [2024-11-20T12:53:40.252Z] Copying: 20/1024 [MB] (20 MBps) [2024-11-20T12:53:41.197Z] Copying: 43/1024 [MB] (23 MBps) [2024-11-20T12:53:42.140Z] Copying: 66/1024 [MB] (22 MBps) [2024-11-20T12:53:43.084Z] Copying: 85/1024 [MB] (19 MBps) [2024-11-20T12:53:44.023Z] Copying: 100/1024 [MB] (15 MBps) [2024-11-20T12:53:44.967Z] Copying: 118/1024 [MB] (18 MBps) [2024-11-20T12:53:45.912Z] Copying: 139/1024 [MB] (20 MBps) [2024-11-20T12:53:47.301Z] Copying: 156/1024 [MB] (17 MBps) [2024-11-20T12:53:48.248Z] Copying: 177/1024 [MB] (20 MBps) [2024-11-20T12:53:49.193Z] Copying: 193/1024 [MB] (15 MBps) [2024-11-20T12:53:50.133Z] Copying: 214/1024 [MB] (21 MBps) [2024-11-20T12:53:51.077Z] Copying: 235/1024 [MB] (20 MBps) [2024-11-20T12:53:52.051Z] Copying: 252/1024 [MB] (17 MBps) [2024-11-20T12:53:53.003Z] Copying: 271/1024 [MB] (18 MBps) [2024-11-20T12:53:53.948Z] Copying: 292/1024 [MB] (20 MBps) [2024-11-20T12:53:55.335Z] Copying: 309/1024 [MB] (17 MBps) [2024-11-20T12:53:55.907Z] Copying: 328/1024 [MB] (18 MBps) [2024-11-20T12:53:57.295Z] Copying: 341/1024 [MB] (12 MBps) [2024-11-20T12:53:58.240Z] Copying: 354/1024 [MB] (13 MBps) [2024-11-20T12:53:59.180Z] Copying: 365/1024 [MB] (11 MBps) [2024-11-20T12:54:00.119Z] Copying: 376/1024 [MB] (10 MBps) [2024-11-20T12:54:01.065Z] Copying: 396/1024 [MB] (20 MBps) [2024-11-20T12:54:02.008Z] Copying: 410/1024 [MB] (13 MBps) [2024-11-20T12:54:02.957Z] Copying: 426/1024 [MB] (16 MBps) [2024-11-20T12:54:03.901Z] Copying: 442/1024 [MB] (16 MBps) [2024-11-20T12:54:05.286Z] Copying: 457/1024 [MB] (14 MBps) [2024-11-20T12:54:06.227Z] Copying: 482/1024 [MB] (24 MBps) 
[2024-11-20T12:54:07.173Z] Copying: 507/1024 [MB] (25 MBps) [2024-11-20T12:54:08.164Z] Copying: 523/1024 [MB] (15 MBps) [2024-11-20T12:54:09.109Z] Copying: 539/1024 [MB] (16 MBps) [2024-11-20T12:54:10.061Z] Copying: 558/1024 [MB] (19 MBps) [2024-11-20T12:54:11.005Z] Copying: 576/1024 [MB] (18 MBps) [2024-11-20T12:54:11.983Z] Copying: 593/1024 [MB] (17 MBps) [2024-11-20T12:54:12.929Z] Copying: 611/1024 [MB] (17 MBps) [2024-11-20T12:54:14.318Z] Copying: 632/1024 [MB] (21 MBps) [2024-11-20T12:54:15.262Z] Copying: 643/1024 [MB] (10 MBps) [2024-11-20T12:54:16.204Z] Copying: 661/1024 [MB] (18 MBps) [2024-11-20T12:54:17.147Z] Copying: 675/1024 [MB] (13 MBps) [2024-11-20T12:54:18.093Z] Copying: 685/1024 [MB] (10 MBps) [2024-11-20T12:54:19.038Z] Copying: 696/1024 [MB] (10 MBps) [2024-11-20T12:54:19.983Z] Copying: 707/1024 [MB] (10 MBps) [2024-11-20T12:54:20.923Z] Copying: 717/1024 [MB] (10 MBps) [2024-11-20T12:54:22.308Z] Copying: 728/1024 [MB] (10 MBps) [2024-11-20T12:54:23.253Z] Copying: 744/1024 [MB] (15 MBps) [2024-11-20T12:54:24.197Z] Copying: 757/1024 [MB] (13 MBps) [2024-11-20T12:54:25.141Z] Copying: 776/1024 [MB] (19 MBps) [2024-11-20T12:54:26.080Z] Copying: 788/1024 [MB] (11 MBps) [2024-11-20T12:54:27.014Z] Copying: 824/1024 [MB] (36 MBps) [2024-11-20T12:54:27.949Z] Copying: 841/1024 [MB] (16 MBps) [2024-11-20T12:54:29.325Z] Copying: 859/1024 [MB] (18 MBps) [2024-11-20T12:54:30.261Z] Copying: 879/1024 [MB] (19 MBps) [2024-11-20T12:54:31.206Z] Copying: 900/1024 [MB] (21 MBps) [2024-11-20T12:54:32.147Z] Copying: 933/1024 [MB] (33 MBps) [2024-11-20T12:54:33.089Z] Copying: 953/1024 [MB] (19 MBps) [2024-11-20T12:54:34.034Z] Copying: 977/1024 [MB] (23 MBps) [2024-11-20T12:54:34.979Z] Copying: 998/1024 [MB] (20 MBps) [2024-11-20T12:54:35.241Z] Copying: 1018/1024 [MB] (19 MBps) [2024-11-20T12:54:35.241Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-20 12:54:35.209969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.722 [2024-11-20 12:54:35.210030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:09.722 [2024-11-20 12:54:35.210047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:09.722 [2024-11-20 12:54:35.210056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.722 [2024-11-20 12:54:35.210078] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:09.722 [2024-11-20 12:54:35.213174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.722 [2024-11-20 12:54:35.213208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:09.722 [2024-11-20 12:54:35.213220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.080 ms 00:22:09.722 [2024-11-20 12:54:35.213229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.722 [2024-11-20 12:54:35.215326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.722 [2024-11-20 12:54:35.215371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:09.722 [2024-11-20 12:54:35.215383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.054 ms 00:22:09.722 [2024-11-20 12:54:35.215392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.722 [2024-11-20 12:54:35.231994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.722 [2024-11-20 12:54:35.232042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist L2P 00:22:09.722 [2024-11-20 12:54:35.232054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.585 ms 00:22:09.722 [2024-11-20 12:54:35.232061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.985 [2024-11-20 12:54:35.238233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.985 [2024-11-20 12:54:35.238281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:09.985 [2024-11-20 12:54:35.238292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.130 ms 00:22:09.985 [2024-11-20 12:54:35.238300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.985 [2024-11-20 12:54:35.264651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.985 [2024-11-20 12:54:35.264695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:09.985 [2024-11-20 12:54:35.264707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.294 ms 00:22:09.985 [2024-11-20 12:54:35.264716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.985 [2024-11-20 12:54:35.280051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.985 [2024-11-20 12:54:35.280243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:09.985 [2024-11-20 12:54:35.280266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.273 ms 00:22:09.985 [2024-11-20 12:54:35.280275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.985 [2024-11-20 12:54:35.280410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.985 [2024-11-20 12:54:35.280422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:09.985 [2024-11-20 12:54:35.280437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:22:09.985 [2024-11-20 12:54:35.280445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.985 [2024-11-20 12:54:35.305273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.985 [2024-11-20 12:54:35.305317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:09.985 [2024-11-20 12:54:35.305330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.812 ms 00:22:09.985 [2024-11-20 12:54:35.305336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.985 [2024-11-20 12:54:35.330222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.985 [2024-11-20 12:54:35.330265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:09.985 [2024-11-20 12:54:35.330289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.843 ms 00:22:09.985 [2024-11-20 12:54:35.330296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.985 [2024-11-20 12:54:35.354914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.985 [2024-11-20 12:54:35.354959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:09.985 [2024-11-20 12:54:35.354971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.575 ms 00:22:09.985 [2024-11-20 12:54:35.354978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.985 [2024-11-20 12:54:35.379568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.985 
[2024-11-20 12:54:35.379610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:09.985 [2024-11-20 12:54:35.379620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.521 ms 00:22:09.985 [2024-11-20 12:54:35.379627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.985 [2024-11-20 12:54:35.379669] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:09.985 [2024-11-20 12:54:35.379685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:09.985 [2024-11-20 12:54:35.379696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:09.985 [2024-11-20 12:54:35.379703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:09.985 [2024-11-20 12:54:35.379711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:09.985 [2024-11-20 12:54:35.379718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:09.985 [2024-11-20 12:54:35.379726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:09.985 [2024-11-20 12:54:35.379734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:09.985 [2024-11-20 12:54:35.379773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:09.985 [2024-11-20 12:54:35.379781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:09.985 [2024-11-20 12:54:35.379789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:09.985 [2024-11-20 12:54:35.379796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:09.985 [2024-11-20 12:54:35.379804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:09.985 [2024-11-20 12:54:35.379812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:09.985 [2024-11-20 12:54:35.379819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:09.985 [2024-11-20 12:54:35.379826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:09.985 [2024-11-20 12:54:35.379834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:09.985 [2024-11-20 12:54:35.379842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:09.985 [2024-11-20 12:54:35.379849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:09.985 [2024-11-20 12:54:35.379857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:09.985 [2024-11-20 12:54:35.379864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.379872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.379879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 
0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.379887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.379894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.379902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.379910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.379919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.379926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.379935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.379943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.379951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.379958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.379966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.379973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.379980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.379988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.379995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380270] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 
12:54:35.380458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:09.986 [2024-11-20 12:54:35.380481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:09.987 [2024-11-20 12:54:35.380502] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:09.987 [2024-11-20 12:54:35.380516] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 935d31be-3433-439f-a336-7f65533c8f51 00:22:09.987 [2024-11-20 12:54:35.380524] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:09.987 [2024-11-20 12:54:35.380534] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:09.987 [2024-11-20 12:54:35.380542] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:09.987 [2024-11-20 12:54:35.380551] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:09.987 [2024-11-20 12:54:35.380558] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:09.987 [2024-11-20 12:54:35.380566] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:09.987 [2024-11-20 12:54:35.380574] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:09.987 [2024-11-20 12:54:35.380587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:09.987 [2024-11-20 12:54:35.380594] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:09.987 [2024-11-20 12:54:35.380601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.987 [2024-11-20 12:54:35.380609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:09.987 [2024-11-20 12:54:35.380619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms 00:22:09.987 [2024-11-20 12:54:35.380627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.987 [2024-11-20 12:54:35.394004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.987 [2024-11-20 12:54:35.394182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:09.987 [2024-11-20 12:54:35.394199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.357 ms 00:22:09.987 [2024-11-20 12:54:35.394208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.987 [2024-11-20 12:54:35.394608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.987 [2024-11-20 12:54:35.394619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:09.987 [2024-11-20 12:54:35.394628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:22:09.987 [2024-11-20 12:54:35.394636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.987 [2024-11-20 12:54:35.430959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.987 [2024-11-20 12:54:35.431006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:09.987 [2024-11-20 12:54:35.431017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.987 [2024-11-20 12:54:35.431025] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.987 [2024-11-20 12:54:35.431087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.987 [2024-11-20 12:54:35.431097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:09.987 [2024-11-20 12:54:35.431105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.987 [2024-11-20 12:54:35.431114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.987 [2024-11-20 12:54:35.431183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.987 [2024-11-20 12:54:35.431194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:09.987 [2024-11-20 12:54:35.431202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.987 [2024-11-20 12:54:35.431209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.987 [2024-11-20 12:54:35.431224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.987 [2024-11-20 12:54:35.431232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:09.987 [2024-11-20 12:54:35.431240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.987 [2024-11-20 12:54:35.431248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.246 [2024-11-20 12:54:35.514779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.246 [2024-11-20 12:54:35.514838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:10.246 [2024-11-20 12:54:35.514852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.246 [2024-11-20 12:54:35.514861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.246 [2024-11-20 12:54:35.582903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.246 [2024-11-20 12:54:35.582947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:10.246 [2024-11-20 12:54:35.582959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.246 [2024-11-20 12:54:35.582967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.246 [2024-11-20 12:54:35.583038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.246 [2024-11-20 12:54:35.583049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:10.246 [2024-11-20 12:54:35.583057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.246 [2024-11-20 12:54:35.583065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.246 [2024-11-20 12:54:35.583098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.246 [2024-11-20 12:54:35.583108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:10.246 [2024-11-20 12:54:35.583117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.246 [2024-11-20 12:54:35.583124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.246 [2024-11-20 12:54:35.583210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.246 [2024-11-20 12:54:35.583222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:10.246 [2024-11-20 12:54:35.583231] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.246 [2024-11-20 12:54:35.583238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.246 [2024-11-20 12:54:35.583266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.246 [2024-11-20 12:54:35.583275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:10.246 [2024-11-20 12:54:35.583283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.246 [2024-11-20 12:54:35.583291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.246 [2024-11-20 12:54:35.583326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.246 [2024-11-20 12:54:35.583335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:10.246 [2024-11-20 12:54:35.583346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.246 [2024-11-20 12:54:35.583353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.247 [2024-11-20 12:54:35.583395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.247 [2024-11-20 12:54:35.583405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:10.247 [2024-11-20 12:54:35.583413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.247 [2024-11-20 12:54:35.583421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.247 [2024-11-20 12:54:35.583537] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 373.538 ms, result 0 00:22:11.209 00:22:11.209 00:22:11.209 12:54:36 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:11.209 [2024-11-20 12:54:36.418775] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
00:22:11.209 [2024-11-20 12:54:36.418894] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78105 ] 00:22:11.209 [2024-11-20 12:54:36.575644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.209 [2024-11-20 12:54:36.670167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.492 [2024-11-20 12:54:36.920308] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:11.492 [2024-11-20 12:54:36.920374] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:11.755 [2024-11-20 12:54:37.079361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.755 [2024-11-20 12:54:37.079421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:11.755 [2024-11-20 12:54:37.079441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:11.755 [2024-11-20 12:54:37.079450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.755 [2024-11-20 12:54:37.079501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.755 [2024-11-20 12:54:37.079511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:11.755 [2024-11-20 12:54:37.079523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:11.755 [2024-11-20 12:54:37.079530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.755 [2024-11-20 12:54:37.079551] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:11.755 [2024-11-20 12:54:37.080292] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:11.755 [2024-11-20 12:54:37.080318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.755 [2024-11-20 12:54:37.080326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:11.755 [2024-11-20 12:54:37.080336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:22:11.755 [2024-11-20 12:54:37.080343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.755 [2024-11-20 12:54:37.082034] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:11.755 [2024-11-20 12:54:37.095793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.755 [2024-11-20 12:54:37.095840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:11.755 [2024-11-20 12:54:37.095854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.761 ms 00:22:11.755 [2024-11-20 12:54:37.095862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.755 [2024-11-20 12:54:37.095936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.755 [2024-11-20 12:54:37.095946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:11.755 [2024-11-20 12:54:37.095954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:11.755 [2024-11-20 12:54:37.095962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.755 [2024-11-20 12:54:37.103795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:11.755 [2024-11-20 12:54:37.103834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:11.755 [2024-11-20 12:54:37.103845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.757 ms 00:22:11.755 [2024-11-20 12:54:37.103853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.755 [2024-11-20 12:54:37.103935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.755 [2024-11-20 12:54:37.103945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:11.755 [2024-11-20 12:54:37.103953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:22:11.755 [2024-11-20 12:54:37.103961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.755 [2024-11-20 12:54:37.104000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.755 [2024-11-20 12:54:37.104011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:11.755 [2024-11-20 12:54:37.104020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:11.755 [2024-11-20 12:54:37.104028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.755 [2024-11-20 12:54:37.104049] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:11.756 [2024-11-20 12:54:37.108128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.756 [2024-11-20 12:54:37.108165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:11.756 [2024-11-20 12:54:37.108176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.084 ms 00:22:11.756 [2024-11-20 12:54:37.108187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.756 [2024-11-20 12:54:37.108221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.756 [2024-11-20 12:54:37.108229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:11.756 [2024-11-20 12:54:37.108238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:11.756 [2024-11-20 12:54:37.108247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.756 [2024-11-20 12:54:37.108297] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:11.756 [2024-11-20 12:54:37.108320] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:11.756 [2024-11-20 12:54:37.108357] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:11.756 [2024-11-20 12:54:37.108376] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:11.756 [2024-11-20 12:54:37.108482] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:11.756 [2024-11-20 12:54:37.108494] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:11.756 [2024-11-20 12:54:37.108505] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:11.756 [2024-11-20 12:54:37.108516] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:11.756 [2024-11-20 12:54:37.108525] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:11.756 [2024-11-20 12:54:37.108534] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:11.756 [2024-11-20 12:54:37.108541] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:11.756 [2024-11-20 12:54:37.108550] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:11.756 [2024-11-20 12:54:37.108557] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:11.756 [2024-11-20 12:54:37.108568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.756 [2024-11-20 12:54:37.108576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:11.756 [2024-11-20 12:54:37.108584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:22:11.756 [2024-11-20 12:54:37.108592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.756 [2024-11-20 12:54:37.108678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.756 [2024-11-20 12:54:37.108686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:11.756 [2024-11-20 12:54:37.108696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:22:11.756 [2024-11-20 12:54:37.108703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.756 [2024-11-20 12:54:37.108833] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:11.756 [2024-11-20 12:54:37.108848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:11.756 [2024-11-20 12:54:37.108856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:11.756 [2024-11-20 12:54:37.108864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.756 [2024-11-20 12:54:37.108873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:11.756 [2024-11-20 12:54:37.108880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:11.756 [2024-11-20 12:54:37.108888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:11.756 [2024-11-20 12:54:37.108896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:11.756 [2024-11-20 12:54:37.108903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:11.756 [2024-11-20 12:54:37.108911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:11.756 [2024-11-20 12:54:37.108919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:11.756 [2024-11-20 12:54:37.108925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:11.756 [2024-11-20 12:54:37.108932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:11.756 [2024-11-20 12:54:37.108939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:11.756 [2024-11-20 12:54:37.108946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:11.756 [2024-11-20 12:54:37.108959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.756 [2024-11-20 12:54:37.108966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:11.756 [2024-11-20 12:54:37.108974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:11.756 [2024-11-20 12:54:37.108981] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.756 [2024-11-20 12:54:37.108988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:11.756 [2024-11-20 12:54:37.108995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:11.756 [2024-11-20 12:54:37.109002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:11.756 [2024-11-20 12:54:37.109009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:11.756 [2024-11-20 12:54:37.109016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:11.756 [2024-11-20 12:54:37.109023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:11.756 [2024-11-20 12:54:37.109029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:11.756 [2024-11-20 12:54:37.109036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:11.756 [2024-11-20 12:54:37.109042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:11.756 [2024-11-20 12:54:37.109049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:11.756 [2024-11-20 12:54:37.109057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:11.756 [2024-11-20 12:54:37.109064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:11.756 [2024-11-20 12:54:37.109071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:11.756 [2024-11-20 12:54:37.109085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:11.756 [2024-11-20 12:54:37.109092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:11.756 [2024-11-20 12:54:37.109099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:11.756 [2024-11-20 12:54:37.109106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:11.756 [2024-11-20 12:54:37.109113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:11.756 [2024-11-20 12:54:37.109120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:11.756 [2024-11-20 12:54:37.109127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:11.756 [2024-11-20 12:54:37.109133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.756 [2024-11-20 12:54:37.109140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:11.756 [2024-11-20 12:54:37.109146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:11.756 [2024-11-20 12:54:37.109153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.756 [2024-11-20 12:54:37.109160] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:11.756 [2024-11-20 12:54:37.109168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:11.756 [2024-11-20 12:54:37.109176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:11.756 [2024-11-20 12:54:37.109183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.756 [2024-11-20 12:54:37.109191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:11.756 [2024-11-20 12:54:37.109198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:11.756 [2024-11-20 12:54:37.109207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:11.756 
[2024-11-20 12:54:37.109214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:11.756 [2024-11-20 12:54:37.109221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:11.756 [2024-11-20 12:54:37.109227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:11.756 [2024-11-20 12:54:37.109236] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:11.756 [2024-11-20 12:54:37.109246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:11.756 [2024-11-20 12:54:37.109256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:11.756 [2024-11-20 12:54:37.109263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:11.757 [2024-11-20 12:54:37.109271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:11.757 [2024-11-20 12:54:37.109278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:11.757 [2024-11-20 12:54:37.109285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:11.757 [2024-11-20 12:54:37.109293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:11.757 [2024-11-20 12:54:37.109300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:11.757 [2024-11-20 12:54:37.109308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:11.757 [2024-11-20 12:54:37.109315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:11.757 [2024-11-20 12:54:37.109322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:11.757 [2024-11-20 12:54:37.109329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:11.757 [2024-11-20 12:54:37.109337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:11.757 [2024-11-20 12:54:37.109344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:11.757 [2024-11-20 12:54:37.109352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:11.757 [2024-11-20 12:54:37.109358] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:11.757 [2024-11-20 12:54:37.109370] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:11.757 [2024-11-20 12:54:37.109378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:11.757 [2024-11-20 12:54:37.109385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:11.757 [2024-11-20 12:54:37.109393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:11.757 [2024-11-20 12:54:37.109401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:11.757 [2024-11-20 12:54:37.109409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.757 [2024-11-20 12:54:37.109416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:11.757 [2024-11-20 12:54:37.109424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms 00:22:11.757 [2024-11-20 12:54:37.109432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.757 [2024-11-20 12:54:37.140721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.757 [2024-11-20 12:54:37.140786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:11.757 [2024-11-20 12:54:37.140797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.247 ms 00:22:11.757 [2024-11-20 12:54:37.140806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.757 [2024-11-20 12:54:37.140900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.757 [2024-11-20 12:54:37.140909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:11.757 [2024-11-20 12:54:37.140919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:11.757 [2024-11-20 12:54:37.140926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.757 [2024-11-20 12:54:37.188231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.757 [2024-11-20 12:54:37.188285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:11.757 [2024-11-20 12:54:37.188298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.248 ms 00:22:11.757 [2024-11-20 12:54:37.188307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.757 [2024-11-20 12:54:37.188356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.757 [2024-11-20 12:54:37.188366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:11.757 [2024-11-20 12:54:37.188375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:11.757 [2024-11-20 12:54:37.188387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.757 [2024-11-20 12:54:37.189012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.757 [2024-11-20 12:54:37.189044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:11.757 [2024-11-20 12:54:37.189055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:22:11.757 [2024-11-20 12:54:37.189063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.757 [2024-11-20 12:54:37.189231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.757 [2024-11-20 12:54:37.189241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:11.757 [2024-11-20 12:54:37.189250] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:22:11.757 [2024-11-20 12:54:37.189263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.757 [2024-11-20 12:54:37.204808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.757 [2024-11-20 12:54:37.204850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:11.757 [2024-11-20 12:54:37.204865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.525 ms 00:22:11.757 [2024-11-20 12:54:37.204874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.757 [2024-11-20 12:54:37.218587] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:11.757 [2024-11-20 12:54:37.218636] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:11.757 [2024-11-20 12:54:37.218650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.757 [2024-11-20 12:54:37.218659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:11.757 [2024-11-20 12:54:37.218669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.668 ms 00:22:11.757 [2024-11-20 12:54:37.218677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.757 [2024-11-20 12:54:37.244213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.757 [2024-11-20 12:54:37.244268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:11.757 [2024-11-20 12:54:37.244280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.485 ms 00:22:11.757 [2024-11-20 12:54:37.244288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.757 [2024-11-20 12:54:37.256750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.757 [2024-11-20 12:54:37.256795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:11.757 [2024-11-20 12:54:37.256807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.409 ms 00:22:11.757 [2024-11-20 12:54:37.256814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.757 [2024-11-20 12:54:37.269044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.757 [2024-11-20 12:54:37.269087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:11.757 [2024-11-20 12:54:37.269099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.184 ms 00:22:11.757 [2024-11-20 12:54:37.269106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.757 [2024-11-20 12:54:37.269783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.757 [2024-11-20 12:54:37.269810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:11.757 [2024-11-20 12:54:37.269820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:22:11.757 [2024-11-20 12:54:37.269831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.019 [2024-11-20 12:54:37.334235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.019 [2024-11-20 12:54:37.334474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:12.019 [2024-11-20 12:54:37.334508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.382 ms 00:22:12.019 [2024-11-20 12:54:37.334517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.019 [2024-11-20 12:54:37.353951] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:12.019 [2024-11-20 12:54:37.357455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.019 [2024-11-20 12:54:37.357647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:12.019 [2024-11-20 12:54:37.357668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.528 ms 00:22:12.019 [2024-11-20 12:54:37.357679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.020 [2024-11-20 12:54:37.357815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.020 [2024-11-20 12:54:37.357830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:12.020 [2024-11-20 12:54:37.357841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:12.020 [2024-11-20 12:54:37.357853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.020 [2024-11-20 12:54:37.357927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.020 [2024-11-20 12:54:37.357938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:12.020 [2024-11-20 12:54:37.357949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:12.020 [2024-11-20 12:54:37.357957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.020 [2024-11-20 12:54:37.357981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.020 [2024-11-20 12:54:37.357991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:12.020 [2024-11-20 12:54:37.358000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:12.020 [2024-11-20 12:54:37.358008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.020 [2024-11-20 12:54:37.358041] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:12.020 [2024-11-20 12:54:37.358055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.020 [2024-11-20 12:54:37.358063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:12.020 [2024-11-20 12:54:37.358072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:12.020 [2024-11-20 12:54:37.358081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.020 [2024-11-20 12:54:37.384349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.020 [2024-11-20 12:54:37.384526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:12.020 [2024-11-20 12:54:37.384547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.245 ms 00:22:12.020 [2024-11-20 12:54:37.384563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.020 [2024-11-20 12:54:37.384645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.020 [2024-11-20 12:54:37.384655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:12.020 [2024-11-20 12:54:37.384664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:12.020 [2024-11-20 12:54:37.384672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:12.020 [2024-11-20 12:54:37.385981] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 306.114 ms, result 0 00:22:13.400
[spdk_dd progress meter elided: 61 incremental "Copying: N/1024 [MB] (rate)" updates between 2024-11-20T12:54:39.852Z and 2024-11-20T12:55:39.707Z; final entry retained below]
[2024-11-20T12:55:39.985Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-20 12:55:39.845675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.466 [2024-11-20 12:55:39.845753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:14.466 [2024-11-20 12:55:39.845767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:14.466 [2024-11-20 12:55:39.845775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.466 [2024-11-20 12:55:39.845797] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:14.466 [2024-11-20 12:55:39.848848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.466 [2024-11-20 12:55:39.848880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:14.466 [2024-11-20 12:55:39.848895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.037 ms 00:23:14.466 [2024-11-20 12:55:39.848903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.466 [2024-11-20 12:55:39.849121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.467 [2024-11-20 12:55:39.849131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:14.467 [2024-11-20 12:55:39.849139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:23:14.467 [2024-11-20 12:55:39.849147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.467 [2024-11-20 12:55:39.852591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.467 [2024-11-20 12:55:39.852613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:14.467 [2024-11-20 12:55:39.852623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.431 ms 00:23:14.467 [2024-11-20 12:55:39.852631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.467 [2024-11-20 12:55:39.859362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.467 [2024-11-20 12:55:39.859390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:14.467 [2024-11-20 12:55:39.859399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.711 ms 00:23:14.467 [2024-11-20 12:55:39.859406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.467 [2024-11-20 12:55:39.884213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.467 [2024-11-20 12:55:39.884246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:14.467 [2024-11-20 12:55:39.884257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.750 ms 00:23:14.467 [2024-11-20 12:55:39.884264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.467 [2024-11-20 12:55:39.898160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.467 [2024-11-20 12:55:39.898293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:14.467 [2024-11-20 12:55:39.898311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.865 ms
00:23:14.467 [2024-11-20 12:55:39.898319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.467 [2024-11-20 12:55:39.898691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.467 [2024-11-20 12:55:39.898734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:14.467 [2024-11-20 12:55:39.898772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:23:14.467 [2024-11-20 12:55:39.898781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.467 [2024-11-20 12:55:39.922446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.467 [2024-11-20 12:55:39.922477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:14.467 [2024-11-20 12:55:39.922488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.651 ms 00:23:14.467 [2024-11-20 12:55:39.922495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.467 [2024-11-20 12:55:39.945304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.467 [2024-11-20 12:55:39.945436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:14.467 [2024-11-20 12:55:39.945452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.779 ms 00:23:14.467 [2024-11-20 12:55:39.945459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.467 [2024-11-20 12:55:39.967872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.467 [2024-11-20 12:55:39.967990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:14.467 [2024-11-20 12:55:39.968005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.379 ms 00:23:14.467 [2024-11-20 12:55:39.968012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.733 [2024-11-20 12:55:39.990700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.733 [2024-11-20 12:55:39.990728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:14.733 [2024-11-20 12:55:39.990754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.639 ms 00:23:14.733 [2024-11-20 12:55:39.990762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.733 [2024-11-20 12:55:39.990790] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:14.733 [2024-11-20 12:55:39.990803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:14.733 [2024-11-20 12:55:39.990816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:14.733 [2024-11-20 12:55:39.990824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:14.733 [2024-11-20 12:55:39.990832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:14.733 [2024-11-20 12:55:39.990839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:14.733 [2024-11-20 12:55:39.990847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:14.733 [2024-11-20 12:55:39.990854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:14.733 [2024-11-20 12:55:39.990861] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:14.733 [2024-11-20 12:55:39.990868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:14.733 [2024-11-20 12:55:39.990877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:14.733 [2024-11-20 12:55:39.990884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:14.733 [2024-11-20 12:55:39.990891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:14.733 [2024-11-20 12:55:39.990899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:14.733 [2024-11-20 12:55:39.990927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.990934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.990942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.990949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.990957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.990964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.990972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.990979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.990986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.990994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 
12:55:39.991071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:23:14.734 [2024-11-20 12:55:39.991255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:14.734 [2024-11-20 12:55:39.991526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:14.735 [2024-11-20 12:55:39.991533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:14.735 [2024-11-20 12:55:39.991541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:14.735 [2024-11-20 12:55:39.991548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:14.735 [2024-11-20 12:55:39.991555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:14.735 [2024-11-20 12:55:39.991562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:14.735 [2024-11-20 12:55:39.991577] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:14.735 [2024-11-20 12:55:39.991587] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 935d31be-3433-439f-a336-7f65533c8f51 00:23:14.735 [2024-11-20 12:55:39.991595] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:14.735 [2024-11-20 12:55:39.991602] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:14.735 [2024-11-20 12:55:39.991609] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:14.735 [2024-11-20 12:55:39.991616] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:14.735 [2024-11-20 12:55:39.991624] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:14.735 [2024-11-20 12:55:39.991631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:14.735 
[2024-11-20 12:55:39.991643] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:14.735 [2024-11-20 12:55:39.991650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:14.735 [2024-11-20 12:55:39.991656] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:14.735 [2024-11-20 12:55:39.991663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.735 [2024-11-20 12:55:39.991671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:14.735 [2024-11-20 12:55:39.991679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.874 ms 00:23:14.735 [2024-11-20 12:55:39.991686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.735 [2024-11-20 12:55:40.003686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.735 [2024-11-20 12:55:40.003716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:14.735 [2024-11-20 12:55:40.003727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.984 ms 00:23:14.735 [2024-11-20 12:55:40.003736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.735 [2024-11-20 12:55:40.004093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.735 [2024-11-20 12:55:40.004101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:14.735 [2024-11-20 12:55:40.004109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:23:14.735 [2024-11-20 12:55:40.004121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.735 [2024-11-20 12:55:40.036692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.735 [2024-11-20 12:55:40.036727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:14.735 [2024-11-20 12:55:40.036749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.735 [2024-11-20 12:55:40.036758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.735 [2024-11-20 12:55:40.036809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.735 [2024-11-20 12:55:40.036817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:14.735 [2024-11-20 12:55:40.036825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.735 [2024-11-20 12:55:40.036835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.735 [2024-11-20 12:55:40.036890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.735 [2024-11-20 12:55:40.036899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:14.735 [2024-11-20 12:55:40.036907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.735 [2024-11-20 12:55:40.036914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.735 [2024-11-20 12:55:40.036929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.735 [2024-11-20 12:55:40.036936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:14.735 [2024-11-20 12:55:40.036944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.735 [2024-11-20 12:55:40.036951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.735 [2024-11-20 12:55:40.113765] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.735 [2024-11-20 12:55:40.113975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:14.735 [2024-11-20 12:55:40.113994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.735 [2024-11-20 12:55:40.114003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.735 [2024-11-20 12:55:40.180029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.735 [2024-11-20 12:55:40.180084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:14.735 [2024-11-20 12:55:40.180096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.735 [2024-11-20 12:55:40.180105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.735 [2024-11-20 12:55:40.180198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.735 [2024-11-20 12:55:40.180208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:14.735 [2024-11-20 12:55:40.180217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.735 [2024-11-20 12:55:40.180226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.735 [2024-11-20 12:55:40.180264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.735 [2024-11-20 12:55:40.180273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:14.735 [2024-11-20 12:55:40.180282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.735 [2024-11-20 12:55:40.180289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.735 [2024-11-20 12:55:40.180385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.735 [2024-11-20 12:55:40.180395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:14.735 [2024-11-20 12:55:40.180403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.735 [2024-11-20 12:55:40.180411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.735 [2024-11-20 12:55:40.180441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.735 [2024-11-20 12:55:40.180451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:14.735 [2024-11-20 12:55:40.180459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.735 [2024-11-20 12:55:40.180467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.735 [2024-11-20 12:55:40.180508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.735 [2024-11-20 12:55:40.180520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:14.735 [2024-11-20 12:55:40.180529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.735 [2024-11-20 12:55:40.180537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.735 [2024-11-20 12:55:40.180585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.735 [2024-11-20 12:55:40.180595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:14.735 [2024-11-20 12:55:40.180604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.735 [2024-11-20 12:55:40.180611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:23:14.735 [2024-11-20 12:55:40.180782] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 335.033 ms, result 0 00:23:15.680 00:23:15.680 00:23:15.680 12:55:40 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:18.220 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:18.220 12:55:43 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:23:18.220 [2024-11-20 12:55:43.198614] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:23:18.220 [2024-11-20 12:55:43.198855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78790 ] 00:23:18.220 [2024-11-20 12:55:43.358642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.220 [2024-11-20 12:55:43.451543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.220 [2024-11-20 12:55:43.702327] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:18.220 [2024-11-20 12:55:43.702383] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:18.481 [2024-11-20 12:55:43.859229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.481 [2024-11-20 12:55:43.859272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:18.481 [2024-11-20 12:55:43.859289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:18.481 [2024-11-20 12:55:43.859297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.481 [2024-11-20 12:55:43.859342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.481 [2024-11-20 12:55:43.859353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:18.481 [2024-11-20 12:55:43.859362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:18.481 [2024-11-20 12:55:43.859370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.481 [2024-11-20 12:55:43.859385] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:18.481 [2024-11-20 12:55:43.860125] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:18.481 [2024-11-20 12:55:43.860142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.481 [2024-11-20 12:55:43.860150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:18.481 [2024-11-20 12:55:43.860158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.761 ms 00:23:18.481 [2024-11-20 12:55:43.860165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.481 [2024-11-20 12:55:43.861208] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:18.481 [2024-11-20 12:55:43.873920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.481 [2024-11-20 12:55:43.874054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:18.481 
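Every FTL management step in this log is emitted by mngt/ftl_mngt.c as a fixed four-record group: an Action/Rollback marker (source line 427), the step name (428), its duration (430), and a status code (431). To pull a per-step timing summary out of a console log like this one, a minimal Python sketch could look like the following; it is a hypothetical helper, assuming one trace_step record per console line as in the raw Jenkins output, not a tool shipped with SPDK.

    import re
    import sys

    # Each FTL management step is logged as four consecutive trace_step
    # records: 427 ("Action"/"Rollback"), 428 ("name: <step>"),
    # 430 ("duration: <n> ms"), 431 ("status: <code>").
    NAME_RE = re.compile(r"428:trace_step: .* name: (?P<name>.+)$")
    DUR_RE = re.compile(r"430:trace_step: .* duration: (?P<ms>[\d.]+) ms")

    def summarize(lines):
        steps, current = [], None
        for line in lines:
            m = NAME_RE.search(line)
            if m:
                current = m.group("name").strip()
                continue
            m = DUR_RE.search(line)
            if m and current is not None:
                steps.append((current, float(m.group("ms"))))
                current = None
        return steps

    if __name__ == "__main__":
        with open(sys.argv[1]) as f:
            for name, ms in summarize(f):
                print(f"{ms:10.3f} ms  {name}")

Run against the startup sequence that follows, this would report, for example, "Load super block" at 12.714 ms and "Check configuration" at 0.004 ms.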
[2024-11-20 12:55:43.874073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.714 ms 00:23:18.481 [2024-11-20 12:55:43.874080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.481 [2024-11-20 12:55:43.874376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.481 [2024-11-20 12:55:43.874402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:18.481 [2024-11-20 12:55:43.874413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:18.481 [2024-11-20 12:55:43.874421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.481 [2024-11-20 12:55:43.879238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.481 [2024-11-20 12:55:43.879271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:18.481 [2024-11-20 12:55:43.879280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.742 ms 00:23:18.481 [2024-11-20 12:55:43.879288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.481 [2024-11-20 12:55:43.879357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.481 [2024-11-20 12:55:43.879365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:18.481 [2024-11-20 12:55:43.879373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:18.481 [2024-11-20 12:55:43.879380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.481 [2024-11-20 12:55:43.879414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.481 [2024-11-20 12:55:43.879423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:18.481 [2024-11-20 12:55:43.879430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:18.481 [2024-11-20 12:55:43.879437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.481 [2024-11-20 12:55:43.879459] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:18.481 [2024-11-20 12:55:43.882780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.481 [2024-11-20 12:55:43.882805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:18.481 [2024-11-20 12:55:43.882814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.328 ms 00:23:18.481 [2024-11-20 12:55:43.882824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.481 [2024-11-20 12:55:43.882851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.481 [2024-11-20 12:55:43.882858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:18.481 [2024-11-20 12:55:43.882866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:18.481 [2024-11-20 12:55:43.882873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.481 [2024-11-20 12:55:43.882891] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:18.481 [2024-11-20 12:55:43.882909] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:18.481 [2024-11-20 12:55:43.882942] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:18.481 [2024-11-20 12:55:43.882959] 
upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:18.481 [2024-11-20 12:55:43.883061] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:18.481 [2024-11-20 12:55:43.883071] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:18.481 [2024-11-20 12:55:43.883081] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:18.481 [2024-11-20 12:55:43.883092] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:18.482 [2024-11-20 12:55:43.883100] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:18.482 [2024-11-20 12:55:43.883108] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:18.482 [2024-11-20 12:55:43.883116] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:18.482 [2024-11-20 12:55:43.883123] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:18.482 [2024-11-20 12:55:43.883130] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:18.482 [2024-11-20 12:55:43.883139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.482 [2024-11-20 12:55:43.883146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:18.482 [2024-11-20 12:55:43.883154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:23:18.482 [2024-11-20 12:55:43.883160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.482 [2024-11-20 12:55:43.883242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.482 [2024-11-20 12:55:43.883250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:18.482 [2024-11-20 12:55:43.883257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:18.482 [2024-11-20 12:55:43.883264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.482 [2024-11-20 12:55:43.883364] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:18.482 [2024-11-20 12:55:43.883375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:18.482 [2024-11-20 12:55:43.883383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:18.482 [2024-11-20 12:55:43.883390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.482 [2024-11-20 12:55:43.883397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:18.482 [2024-11-20 12:55:43.883404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:18.482 [2024-11-20 12:55:43.883410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:18.482 [2024-11-20 12:55:43.883417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:18.482 [2024-11-20 12:55:43.883423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:18.482 [2024-11-20 12:55:43.883430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:18.482 [2024-11-20 12:55:43.883436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:18.482 [2024-11-20 12:55:43.883442] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 80.62 MiB 00:23:18.482 [2024-11-20 12:55:43.883448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:18.482 [2024-11-20 12:55:43.883456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:18.482 [2024-11-20 12:55:43.883462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:18.482 [2024-11-20 12:55:43.883473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.482 [2024-11-20 12:55:43.883480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:18.482 [2024-11-20 12:55:43.883486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:18.482 [2024-11-20 12:55:43.883492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.482 [2024-11-20 12:55:43.883499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:18.482 [2024-11-20 12:55:43.883506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:18.482 [2024-11-20 12:55:43.883512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.482 [2024-11-20 12:55:43.883518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:18.482 [2024-11-20 12:55:43.883525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:18.482 [2024-11-20 12:55:43.883531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.482 [2024-11-20 12:55:43.883537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:18.482 [2024-11-20 12:55:43.883544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:18.482 [2024-11-20 12:55:43.883550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.482 [2024-11-20 12:55:43.883556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:18.482 [2024-11-20 12:55:43.883562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:18.482 [2024-11-20 12:55:43.883569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.482 [2024-11-20 12:55:43.883575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:18.482 [2024-11-20 12:55:43.883582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:18.482 [2024-11-20 12:55:43.883588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:18.482 [2024-11-20 12:55:43.883594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:18.482 [2024-11-20 12:55:43.883600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:18.482 [2024-11-20 12:55:43.883607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:18.482 [2024-11-20 12:55:43.883613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:18.482 [2024-11-20 12:55:43.883619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:18.482 [2024-11-20 12:55:43.883625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.482 [2024-11-20 12:55:43.883632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:18.482 [2024-11-20 12:55:43.883638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:18.482 [2024-11-20 12:55:43.883644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.482 [2024-11-20 12:55:43.883650] 
ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:18.482 [2024-11-20 12:55:43.883657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:18.482 [2024-11-20 12:55:43.883666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:18.482 [2024-11-20 12:55:43.883673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.482 [2024-11-20 12:55:43.883680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:18.482 [2024-11-20 12:55:43.883687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:18.482 [2024-11-20 12:55:43.883693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:18.482 [2024-11-20 12:55:43.883700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:18.482 [2024-11-20 12:55:43.883706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:18.482 [2024-11-20 12:55:43.883713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:18.482 [2024-11-20 12:55:43.883721] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:18.482 [2024-11-20 12:55:43.883729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:18.482 [2024-11-20 12:55:43.883755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:18.482 [2024-11-20 12:55:43.883763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:18.482 [2024-11-20 12:55:43.883771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:18.482 [2024-11-20 12:55:43.883777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:18.482 [2024-11-20 12:55:43.883785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:18.482 [2024-11-20 12:55:43.883792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:18.482 [2024-11-20 12:55:43.883799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:18.482 [2024-11-20 12:55:43.883806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:18.482 [2024-11-20 12:55:43.883813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:18.482 [2024-11-20 12:55:43.883820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:18.482 [2024-11-20 12:55:43.883827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:18.482 [2024-11-20 12:55:43.883834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:18.482 [2024-11-20 12:55:43.883841] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:18.482 [2024-11-20 12:55:43.883848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:18.482 [2024-11-20 12:55:43.883855] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:18.482 [2024-11-20 12:55:43.883865] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:18.482 [2024-11-20 12:55:43.883882] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:18.482 [2024-11-20 12:55:43.883889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:18.482 [2024-11-20 12:55:43.883896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:18.483 [2024-11-20 12:55:43.883903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:18.483 [2024-11-20 12:55:43.883910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.483 [2024-11-20 12:55:43.883917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:18.483 [2024-11-20 12:55:43.883925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:23:18.483 [2024-11-20 12:55:43.883933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.483 [2024-11-20 12:55:43.909495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.483 [2024-11-20 12:55:43.909643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:18.483 [2024-11-20 12:55:43.909659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.509 ms 00:23:18.483 [2024-11-20 12:55:43.909666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.483 [2024-11-20 12:55:43.909777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.483 [2024-11-20 12:55:43.909787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:18.483 [2024-11-20 12:55:43.909794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:23:18.483 [2024-11-20 12:55:43.909801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.483 [2024-11-20 12:55:43.952031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.483 [2024-11-20 12:55:43.952067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:18.483 [2024-11-20 12:55:43.952079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.184 ms 00:23:18.483 [2024-11-20 12:55:43.952087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.483 [2024-11-20 12:55:43.952122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.483 [2024-11-20 12:55:43.952131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:18.483 [2024-11-20 12:55:43.952139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:18.483 [2024-11-20 12:55:43.952149] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.483 [2024-11-20 12:55:43.952498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.483 [2024-11-20 12:55:43.952522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:18.483 [2024-11-20 12:55:43.952530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:23:18.483 [2024-11-20 12:55:43.952537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.483 [2024-11-20 12:55:43.952658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.483 [2024-11-20 12:55:43.952671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:18.483 [2024-11-20 12:55:43.952680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:23:18.483 [2024-11-20 12:55:43.952689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.483 [2024-11-20 12:55:43.965553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.483 [2024-11-20 12:55:43.965583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:18.483 [2024-11-20 12:55:43.965595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.845 ms 00:23:18.483 [2024-11-20 12:55:43.965603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.483 [2024-11-20 12:55:43.978310] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:18.483 [2024-11-20 12:55:43.978343] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:18.483 [2024-11-20 12:55:43.978354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.483 [2024-11-20 12:55:43.978361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:18.483 [2024-11-20 12:55:43.978369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.666 ms 00:23:18.483 [2024-11-20 12:55:43.978376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.742 [2024-11-20 12:55:44.002399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.742 [2024-11-20 12:55:44.002435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:18.742 [2024-11-20 12:55:44.002445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.989 ms 00:23:18.742 [2024-11-20 12:55:44.002452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.742 [2024-11-20 12:55:44.014054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.742 [2024-11-20 12:55:44.014084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:18.742 [2024-11-20 12:55:44.014093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.568 ms 00:23:18.742 [2024-11-20 12:55:44.014100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.742 [2024-11-20 12:55:44.025369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.742 [2024-11-20 12:55:44.025499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:18.742 [2024-11-20 12:55:44.025514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.239 ms 00:23:18.742 [2024-11-20 12:55:44.025521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.742 
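The superblock metadata layout dumped above (the upgrade/ftl_sb_v5.c records) lists each region's offset and size in hexadecimal FTL blocks, while the ftl_layout_dump records print the same regions in MiB. The two agree under a 4 KiB block size; that block size is an assumption inferred from this log itself, since region type 0x2 (the L2P) has blk_sz 0x5000 = 20480 blocks and the layout dump reports it as 80.00 MiB. A minimal sketch of the conversion:

    # Convert blk_offs/blk_sz (hex, in FTL blocks) from the superblock dump
    # above into the MiB figures printed by ftl_layout_dump. The 4 KiB block
    # size is an assumption inferred from this log, not taken from the source.
    FTL_BLOCK_SIZE = 4096  # bytes per FTL block (assumed)

    def blocks_to_mib(nblocks_hex: str) -> float:
        return int(nblocks_hex, 16) * FTL_BLOCK_SIZE / (1024 * 1024)

    print(f"{blocks_to_mib('0x5000'):.2f} MiB")  # l2p (type 0x2)     -> 80.00 MiB
    print(f"{blocks_to_mib('0x80'):.2f} MiB")    # band_md (type 0x3) -> 0.50 MiB
    print(f"{blocks_to_mib('0x800'):.2f} MiB")   # p2l0 (type 0xa)    -> 8.00 MiB

Each printed value matches the corresponding "blocks: ... MiB" line in the NV cache layout dump above.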
[2024-11-20 12:55:44.026144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.742 [2024-11-20 12:55:44.026163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:18.742 [2024-11-20 12:55:44.026172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:23:18.742 [2024-11-20 12:55:44.026181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.742 [2024-11-20 12:55:44.080949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.742 [2024-11-20 12:55:44.080994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:18.742 [2024-11-20 12:55:44.081010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.751 ms 00:23:18.742 [2024-11-20 12:55:44.081017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.742 [2024-11-20 12:55:44.091327] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:18.742 [2024-11-20 12:55:44.093511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.742 [2024-11-20 12:55:44.093540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:18.742 [2024-11-20 12:55:44.093550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.455 ms 00:23:18.742 [2024-11-20 12:55:44.093558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.742 [2024-11-20 12:55:44.093637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.742 [2024-11-20 12:55:44.093647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:18.742 [2024-11-20 12:55:44.093656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:18.742 [2024-11-20 12:55:44.093665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.742 [2024-11-20 12:55:44.093726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.742 [2024-11-20 12:55:44.093735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:18.742 [2024-11-20 12:55:44.093761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:18.742 [2024-11-20 12:55:44.093768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.742 [2024-11-20 12:55:44.093786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.742 [2024-11-20 12:55:44.093793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:18.742 [2024-11-20 12:55:44.093801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:18.742 [2024-11-20 12:55:44.093808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.742 [2024-11-20 12:55:44.093837] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:18.742 [2024-11-20 12:55:44.093848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.742 [2024-11-20 12:55:44.093855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:18.742 [2024-11-20 12:55:44.093862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:18.742 [2024-11-20 12:55:44.093869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.742 [2024-11-20 12:55:44.116758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.742 [2024-11-20 
12:55:44.116789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:18.742 [2024-11-20 12:55:44.116801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.872 ms 00:23:18.742 [2024-11-20 12:55:44.116811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.742 [2024-11-20 12:55:44.116876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.742 [2024-11-20 12:55:44.116885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:18.742 [2024-11-20 12:55:44.116894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:18.742 [2024-11-20 12:55:44.116901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.742 [2024-11-20 12:55:44.117754] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 258.109 ms, result 0 00:23:19.679  [2024-11-20T12:55:46.136Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-20T12:55:47.521Z] Copying: 21/1024 [MB] (10 MBps) [2024-11-20T12:55:48.466Z] Copying: 31/1024 [MB] (10 MBps) [2024-11-20T12:55:49.408Z] Copying: 61/1024 [MB] (29 MBps) [2024-11-20T12:55:50.351Z] Copying: 76/1024 [MB] (14 MBps) [2024-11-20T12:55:51.294Z] Copying: 94/1024 [MB] (18 MBps) [2024-11-20T12:55:52.237Z] Copying: 126/1024 [MB] (31 MBps) [2024-11-20T12:55:53.182Z] Copying: 179/1024 [MB] (52 MBps) [2024-11-20T12:55:54.570Z] Copying: 203/1024 [MB] (24 MBps) [2024-11-20T12:55:55.141Z] Copying: 214/1024 [MB] (10 MBps) [2024-11-20T12:55:56.541Z] Copying: 235/1024 [MB] (21 MBps) [2024-11-20T12:55:57.486Z] Copying: 260/1024 [MB] (25 MBps) [2024-11-20T12:55:58.430Z] Copying: 280/1024 [MB] (19 MBps) [2024-11-20T12:55:59.371Z] Copying: 299/1024 [MB] (19 MBps) [2024-11-20T12:56:00.305Z] Copying: 315/1024 [MB] (15 MBps) [2024-11-20T12:56:01.250Z] Copying: 333/1024 [MB] (18 MBps) [2024-11-20T12:56:02.187Z] Copying: 349/1024 [MB] (16 MBps) [2024-11-20T12:56:03.567Z] Copying: 368/1024 [MB] (18 MBps) [2024-11-20T12:56:04.141Z] Copying: 389/1024 [MB] (21 MBps) [2024-11-20T12:56:05.527Z] Copying: 404/1024 [MB] (14 MBps) [2024-11-20T12:56:06.472Z] Copying: 418/1024 [MB] (14 MBps) [2024-11-20T12:56:07.416Z] Copying: 431/1024 [MB] (13 MBps) [2024-11-20T12:56:08.360Z] Copying: 442/1024 [MB] (10 MBps) [2024-11-20T12:56:09.305Z] Copying: 453/1024 [MB] (10 MBps) [2024-11-20T12:56:10.250Z] Copying: 463/1024 [MB] (10 MBps) [2024-11-20T12:56:11.232Z] Copying: 474/1024 [MB] (10 MBps) [2024-11-20T12:56:12.196Z] Copying: 484/1024 [MB] (10 MBps) [2024-11-20T12:56:13.140Z] Copying: 527/1024 [MB] (43 MBps) [2024-11-20T12:56:14.523Z] Copying: 579/1024 [MB] (51 MBps) [2024-11-20T12:56:15.461Z] Copying: 594/1024 [MB] (15 MBps) [2024-11-20T12:56:16.403Z] Copying: 608/1024 [MB] (13 MBps) [2024-11-20T12:56:17.345Z] Copying: 620/1024 [MB] (12 MBps) [2024-11-20T12:56:18.288Z] Copying: 642/1024 [MB] (21 MBps) [2024-11-20T12:56:19.234Z] Copying: 659/1024 [MB] (17 MBps) [2024-11-20T12:56:20.177Z] Copying: 676/1024 [MB] (16 MBps) [2024-11-20T12:56:21.561Z] Copying: 695/1024 [MB] (19 MBps) [2024-11-20T12:56:22.502Z] Copying: 713/1024 [MB] (18 MBps) [2024-11-20T12:56:23.444Z] Copying: 734/1024 [MB] (20 MBps) [2024-11-20T12:56:24.386Z] Copying: 748/1024 [MB] (14 MBps) [2024-11-20T12:56:25.330Z] Copying: 768/1024 [MB] (19 MBps) [2024-11-20T12:56:26.295Z] Copying: 786/1024 [MB] (18 MBps) [2024-11-20T12:56:27.238Z] Copying: 803/1024 [MB] (16 MBps) [2024-11-20T12:56:28.183Z] Copying: 817/1024 [MB] (14 MBps) 
[2024-11-20T12:56:29.571Z] Copying: 831/1024 [MB] (13 MBps) [2024-11-20T12:56:30.143Z] Copying: 843/1024 [MB] (12 MBps) [2024-11-20T12:56:31.531Z] Copying: 854/1024 [MB] (10 MBps) [2024-11-20T12:56:32.477Z] Copying: 864/1024 [MB] (10 MBps) [2024-11-20T12:56:33.422Z] Copying: 874/1024 [MB] (10 MBps) [2024-11-20T12:56:34.368Z] Copying: 884/1024 [MB] (10 MBps) [2024-11-20T12:56:35.313Z] Copying: 894/1024 [MB] (10 MBps) [2024-11-20T12:56:36.259Z] Copying: 905/1024 [MB] (10 MBps) [2024-11-20T12:56:37.204Z] Copying: 915/1024 [MB] (10 MBps) [2024-11-20T12:56:38.150Z] Copying: 925/1024 [MB] (10 MBps) [2024-11-20T12:56:39.539Z] Copying: 936/1024 [MB] (10 MBps) [2024-11-20T12:56:40.486Z] Copying: 946/1024 [MB] (10 MBps) [2024-11-20T12:56:41.419Z] Copying: 957/1024 [MB] (10 MBps) [2024-11-20T12:56:42.426Z] Copying: 968/1024 [MB] (11 MBps) [2024-11-20T12:56:43.371Z] Copying: 987/1024 [MB] (19 MBps) [2024-11-20T12:56:44.314Z] Copying: 1002/1024 [MB] (15 MBps) [2024-11-20T12:56:45.257Z] Copying: 1020/1024 [MB] (17 MBps) [2024-11-20T12:56:45.257Z] Copying: 1048496/1048576 [kB] (3368 kBps) [2024-11-20T12:56:45.257Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-20 12:56:45.223636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.738 [2024-11-20 12:56:45.223906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:19.738 [2024-11-20 12:56:45.223937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:19.738 [2024-11-20 12:56:45.223958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.738 [2024-11-20 12:56:45.226170] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:19.738 [2024-11-20 12:56:45.231207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.738 [2024-11-20 12:56:45.231387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:19.738 [2024-11-20 12:56:45.231409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.982 ms 00:24:19.738 [2024-11-20 12:56:45.231419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.738 [2024-11-20 12:56:45.243469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.738 [2024-11-20 12:56:45.243631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:19.738 [2024-11-20 12:56:45.243780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.016 ms 00:24:19.738 [2024-11-20 12:56:45.243815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.000 [2024-11-20 12:56:45.268921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.000 [2024-11-20 12:56:45.269083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:20.000 [2024-11-20 12:56:45.269257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.057 ms 00:24:20.000 [2024-11-20 12:56:45.269301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.000 [2024-11-20 12:56:45.275476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.000 [2024-11-20 12:56:45.275638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:20.000 [2024-11-20 12:56:45.275776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.117 ms 00:24:20.000 [2024-11-20 12:56:45.275807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.000 
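The copy loop above moved the 1024 MB test file through ftl0 in roughly a minute, and spdk_dd's final record reports an average of 16 MBps. That figure can be roughly reproduced from the bracketed progress timestamps; a minimal sketch, assuming the [2024-11-20T12:56:45.257Z]-style timestamp format shown above:

    import re
    from datetime import datetime

    # Recompute the average throughput from spdk_dd progress records such as
    #   [2024-11-20T12:56:45.257Z] Copying: 1020/1024 [MB] (17 MBps)
    PROG_RE = re.compile(
        r"\[(?P<ts>\d{4}-\d{2}-\d{2}T[\d:.]+)Z\] Copying: (?P<mb>\d+)/\d+ \[MB\]")

    def average_mbps(log_text: str) -> float:
        samples = [(datetime.fromisoformat(m.group("ts")), int(m.group("mb")))
                   for m in PROG_RE.finditer(log_text)]
        (t0, mb0), (t1, mb1) = samples[0], samples[-1]
        return (mb1 - mb0) / (t1 - t0).total_seconds()

Measured between the first and last progress records this lands near 17 MBps, in the same ballpark as spdk_dd's own 16 MBps; the small difference comes from where the measurement window starts. The shutdown statistics a few records below tie out similarly: the printed WAF is consistent with total writes over user writes, 101824 / 100864 ≈ 1.0095, while the earlier shutdown dump printed "WAF: inf" because it had "user writes: 0".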
[2024-11-20 12:56:45.302617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.000 [2024-11-20 12:56:45.302817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:20.000 [2024-11-20 12:56:45.302891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.725 ms 00:24:20.000 [2024-11-20 12:56:45.302914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.000 [2024-11-20 12:56:45.319693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.000 [2024-11-20 12:56:45.319957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:20.000 [2024-11-20 12:56:45.320358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.353 ms 00:24:20.000 [2024-11-20 12:56:45.320414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.000 [2024-11-20 12:56:45.514706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.000 [2024-11-20 12:56:45.514901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:20.000 [2024-11-20 12:56:45.514962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 194.151 ms 00:24:20.000 [2024-11-20 12:56:45.514986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.264 [2024-11-20 12:56:45.540724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.264 [2024-11-20 12:56:45.540911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:20.264 [2024-11-20 12:56:45.540969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.705 ms 00:24:20.264 [2024-11-20 12:56:45.540991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.264 [2024-11-20 12:56:45.566658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.264 [2024-11-20 12:56:45.566873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:20.264 [2024-11-20 12:56:45.566941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.544 ms 00:24:20.264 [2024-11-20 12:56:45.566963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.264 [2024-11-20 12:56:45.592528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.264 [2024-11-20 12:56:45.592733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:20.264 [2024-11-20 12:56:45.592829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.253 ms 00:24:20.264 [2024-11-20 12:56:45.592853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.264 [2024-11-20 12:56:45.617473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.264 [2024-11-20 12:56:45.617644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:20.264 [2024-11-20 12:56:45.617703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.526 ms 00:24:20.264 [2024-11-20 12:56:45.617724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.264 [2024-11-20 12:56:45.617794] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:20.264 [2024-11-20 12:56:45.617823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 100864 / 261120 wr_cnt: 1 state: open 00:24:20.264 [2024-11-20 12:56:45.617856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 
wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.617891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.617919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
27: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:20.264 [2024-11-20 12:56:45.618900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:20.265 [2024-11-20 12:56:45.618907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:20.265 [2024-11-20 12:56:45.618917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:20.265 [2024-11-20 12:56:45.618925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:20.265 [2024-11-20 12:56:45.618932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:20.265 [2024-11-20 12:56:45.618940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:20.265 [2024-11-20 12:56:45.618948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:20.265 [2024-11-20 12:56:45.618957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:20.265 [2024-11-20 12:56:45.618966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:20.265 [2024-11-20 12:56:45.618974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:20.265 [2024-11-20 12:56:45.618982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:20.265 [2024-11-20 12:56:45.618990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:20.265 [2024-11-20 12:56:45.618998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:20.265 [2024-11-20 12:56:45.619006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:20.265 [2024-11-20 12:56:45.619014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:20.265 [2024-11-20 12:56:45.619023] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Bands 52-100: 0 / 261120 wr_cnt: 0 state: free
00:24:20.265 [2024-11-20 12:56:45.619421] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:24:20.265 [2024-11-20 12:56:45.619431] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 935d31be-3433-439f-a336-7f65533c8f51
00:24:20.265 [2024-11-20 12:56:45.619439] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 100864
00:24:20.265 [2024-11-20 12:56:45.619447] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 101824
00:24:20.265 [2024-11-20 12:56:45.619455] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 100864
00:24:20.265 [2024-11-20 12:56:45.619464] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0095
00:24:20.265 [2024-11-20 12:56:45.619472] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:24:20.265 [2024-11-20 12:56:45.619487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:24:20.265 [2024-11-20 12:56:45.619506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:24:20.265 [2024-11-20 12:56:45.619513] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:24:20.265 [2024-11-20 12:56:45.619521] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:24:20.265 [2024-11-20 12:56:45.619531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:20.265 [2024-11-20 12:56:45.619540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:24:20.265 [2024-11-20 12:56:45.619550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.738 ms
00:24:20.265 [2024-11-20 12:56:45.619558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:20.265 [2024-11-20 12:56:45.633139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:20.265 [2024-11-20 12:56:45.633321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:24:20.265 [2024-11-20 12:56:45.633340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.545 ms
00:24:20.265 [2024-11-20 12:56:45.633356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:20.265 [2024-11-20 12:56:45.633797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:20.265 [2024-11-20 12:56:45.633813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:24:20.265 [2024-11-20 12:56:45.633823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms
00:24:20.265 [2024-11-20 12:56:45.633832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:20.265 [2024-11-20 12:56:45.670114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:20.266 [2024-11-20 12:56:45.670163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:24:20.266 [2024-11-20 12:56:45.670181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:20.266 [2024-11-20 12:56:45.670191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:20.266 [2024-11-20 12:56:45.670252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:20.266 [2024-11-20 12:56:45.670262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:24:20.266 [2024-11-20 12:56:45.670272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:20.266 [2024-11-20 12:56:45.670281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:20.266 [2024-11-20 12:56:45.670343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
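The WAF figure in the statistics dump above is simply the ratio of the two counters printed alongside it: total writes divided by user writes, so anything above 1.0 is extra write traffic generated by the FTL itself. A minimal check in Python (the variable names are ours, not SPDK's):

    total_writes = 101824   # blocks, from ftl_dev_dump_stats above
    user_writes = 100864    # blocks submitted by the user I/O path
    print(f"WAF = {total_writes / user_writes:.4f}")   # -> WAF = 1.0095, matching the log

The extra ~960 blocks are metadata and relocation writes performed internally by the FTL.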
00:24:20.266 [2024-11-20 12:56:45.670355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:24:20.266 [2024-11-20 12:56:45.670364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:20.266 [2024-11-20 12:56:45.670377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:20.266 [2024-11-20 12:56:45.670395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:20.266 [2024-11-20 12:56:45.670404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:24:20.266 [2024-11-20 12:56:45.670413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:20.266 [2024-11-20 12:56:45.670422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:20.266 [2024-11-20 12:56:45.753972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:20.266 [2024-11-20 12:56:45.754033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:24:20.266 [2024-11-20 12:56:45.754054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:20.266 [2024-11-20 12:56:45.754063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:20.527 [2024-11-20 12:56:45.822733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:20.527 [2024-11-20 12:56:45.822803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:24:20.527 [2024-11-20 12:56:45.822815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:20.527 [2024-11-20 12:56:45.822823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:20.527 [2024-11-20 12:56:45.822906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:20.527 [2024-11-20 12:56:45.822917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:24:20.527 [2024-11-20 12:56:45.822926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:20.527 [2024-11-20 12:56:45.822935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:20.527 [2024-11-20 12:56:45.822978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:20.527 [2024-11-20 12:56:45.822988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:24:20.527 [2024-11-20 12:56:45.822996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:20.527 [2024-11-20 12:56:45.823005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:20.527 [2024-11-20 12:56:45.823108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:20.527 [2024-11-20 12:56:45.823118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:24:20.527 [2024-11-20 12:56:45.823127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:20.527 [2024-11-20 12:56:45.823135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:20.527 [2024-11-20 12:56:45.823170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:20.527 [2024-11-20 12:56:45.823180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:24:20.527 [2024-11-20 12:56:45.823188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:20.527 [2024-11-20 12:56:45.823196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:20.527 [2024-11-20 12:56:45.823239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Rollback 00:24:20.527 [2024-11-20 12:56:45.823250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:20.527 [2024-11-20 12:56:45.823258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.527 [2024-11-20 12:56:45.823267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.527 [2024-11-20 12:56:45.823319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.527 [2024-11-20 12:56:45.823329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:20.527 [2024-11-20 12:56:45.823338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.527 [2024-11-20 12:56:45.823347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.527 [2024-11-20 12:56:45.823483] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 603.298 ms, result 0 00:24:21.909 00:24:21.909 00:24:21.909 12:56:47 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:21.909 [2024-11-20 12:56:47.262448] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:24:21.909 [2024-11-20 12:56:47.262585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79439 ] 00:24:22.169 [2024-11-20 12:56:47.426266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.169 [2024-11-20 12:56:47.549598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.429 [2024-11-20 12:56:47.838541] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:22.429 [2024-11-20 12:56:47.838621] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:22.691 [2024-11-20 12:56:47.999510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.691 [2024-11-20 12:56:47.999579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:22.691 [2024-11-20 12:56:47.999599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:22.691 [2024-11-20 12:56:47.999608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.691 [2024-11-20 12:56:47.999665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.691 [2024-11-20 12:56:47.999676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:22.691 [2024-11-20 12:56:47.999688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:22.691 [2024-11-20 12:56:47.999696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.691 [2024-11-20 12:56:47.999717] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:22.691 [2024-11-20 12:56:48.000506] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:22.691 [2024-11-20 12:56:48.000529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.691 [2024-11-20 12:56:48.000538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:24:22.691 [2024-11-20 12:56:48.000547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:24:22.691 [2024-11-20 12:56:48.000555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.691 [2024-11-20 12:56:48.002294] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:22.691 [2024-11-20 12:56:48.016479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.691 [2024-11-20 12:56:48.016690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:22.691 [2024-11-20 12:56:48.016713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.187 ms 00:24:22.691 [2024-11-20 12:56:48.016723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.691 [2024-11-20 12:56:48.016822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.691 [2024-11-20 12:56:48.016832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:22.691 [2024-11-20 12:56:48.016841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:22.691 [2024-11-20 12:56:48.016850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.691 [2024-11-20 12:56:48.024974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.691 [2024-11-20 12:56:48.025016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:22.691 [2024-11-20 12:56:48.025026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.044 ms 00:24:22.691 [2024-11-20 12:56:48.025035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.691 [2024-11-20 12:56:48.025119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.691 [2024-11-20 12:56:48.025129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:22.691 [2024-11-20 12:56:48.025138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:24:22.691 [2024-11-20 12:56:48.025146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.691 [2024-11-20 12:56:48.025192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.691 [2024-11-20 12:56:48.025202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:22.691 [2024-11-20 12:56:48.025210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:22.691 [2024-11-20 12:56:48.025217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.691 [2024-11-20 12:56:48.025239] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:22.691 [2024-11-20 12:56:48.029395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.691 [2024-11-20 12:56:48.029449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:22.691 [2024-11-20 12:56:48.029460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.162 ms 00:24:22.691 [2024-11-20 12:56:48.029471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.691 [2024-11-20 12:56:48.029505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.691 [2024-11-20 12:56:48.029514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:22.691 [2024-11-20 12:56:48.029522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.013 ms 00:24:22.691 [2024-11-20 12:56:48.029530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.691 [2024-11-20 12:56:48.029583] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:22.691 [2024-11-20 12:56:48.029606] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:22.691 [2024-11-20 12:56:48.029644] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:22.691 [2024-11-20 12:56:48.029662] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:22.691 [2024-11-20 12:56:48.029794] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:22.691 [2024-11-20 12:56:48.029807] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:22.691 [2024-11-20 12:56:48.029819] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:22.691 [2024-11-20 12:56:48.029830] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:22.691 [2024-11-20 12:56:48.029839] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:22.691 [2024-11-20 12:56:48.029848] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:22.691 [2024-11-20 12:56:48.029855] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:22.691 [2024-11-20 12:56:48.029864] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:22.691 [2024-11-20 12:56:48.029871] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:22.691 [2024-11-20 12:56:48.029883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.691 [2024-11-20 12:56:48.029891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:22.691 [2024-11-20 12:56:48.029899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:24:22.691 [2024-11-20 12:56:48.029907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.691 [2024-11-20 12:56:48.029990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.691 [2024-11-20 12:56:48.029999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:22.691 [2024-11-20 12:56:48.030007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:22.691 [2024-11-20 12:56:48.030014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.691 [2024-11-20 12:56:48.030120] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:22.691 [2024-11-20 12:56:48.030133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:22.691 [2024-11-20 12:56:48.030142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:22.691 [2024-11-20 12:56:48.030150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.692 [2024-11-20 12:56:48.030159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:22.692 [2024-11-20 12:56:48.030166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:22.692 [2024-11-20 
12:56:48.030173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:22.692 [2024-11-20 12:56:48.030181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:22.692 [2024-11-20 12:56:48.030190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:22.692 [2024-11-20 12:56:48.030198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:22.692 [2024-11-20 12:56:48.030206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:22.692 [2024-11-20 12:56:48.030212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:22.692 [2024-11-20 12:56:48.030219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:22.692 [2024-11-20 12:56:48.030227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:22.692 [2024-11-20 12:56:48.030235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:22.692 [2024-11-20 12:56:48.030248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.692 [2024-11-20 12:56:48.030255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:22.692 [2024-11-20 12:56:48.030263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:22.692 [2024-11-20 12:56:48.030270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.692 [2024-11-20 12:56:48.030277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:22.692 [2024-11-20 12:56:48.030284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:22.692 [2024-11-20 12:56:48.030291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:22.692 [2024-11-20 12:56:48.030297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:22.692 [2024-11-20 12:56:48.030304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:22.692 [2024-11-20 12:56:48.030311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:22.692 [2024-11-20 12:56:48.030318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:22.692 [2024-11-20 12:56:48.030325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:22.692 [2024-11-20 12:56:48.030331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:22.692 [2024-11-20 12:56:48.030338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:22.692 [2024-11-20 12:56:48.030344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:22.692 [2024-11-20 12:56:48.030351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:22.692 [2024-11-20 12:56:48.030358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:22.692 [2024-11-20 12:56:48.030366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:22.692 [2024-11-20 12:56:48.030373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:22.692 [2024-11-20 12:56:48.030379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:22.692 [2024-11-20 12:56:48.030385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:22.692 [2024-11-20 12:56:48.030392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:22.692 [2024-11-20 12:56:48.030399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log
00:24:22.692 [2024-11-20 12:56:48.030405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:24:22.692 [2024-11-20 12:56:48.030411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:24:22.692 [2024-11-20 12:56:48.030418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:24:22.692 [2024-11-20 12:56:48.030424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:24:22.692 [2024-11-20 12:56:48.030432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:24:22.692 [2024-11-20 12:56:48.030438] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:24:22.692 [2024-11-20 12:56:48.030446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:24:22.692 [2024-11-20 12:56:48.030454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:24:22.692 [2024-11-20 12:56:48.030461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:24:22.692 [2024-11-20 12:56:48.030468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:24:22.692 [2024-11-20 12:56:48.030475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:24:22.692 [2024-11-20 12:56:48.030483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:24:22.692 [2024-11-20 12:56:48.030491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:24:22.692 [2024-11-20 12:56:48.030498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:24:22.692 [2024-11-20 12:56:48.030505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:24:22.692 [2024-11-20 12:56:48.030512] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:24:22.692 [2024-11-20 12:56:48.030521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:22.692 [2024-11-20 12:56:48.030530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:24:22.692 [2024-11-20 12:56:48.030538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:24:22.692 [2024-11-20 12:56:48.030545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:24:22.692 [2024-11-20 12:56:48.030552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:24:22.692 [2024-11-20 12:56:48.030559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:24:22.692 [2024-11-20 12:56:48.030566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:24:22.692 [2024-11-20 12:56:48.030573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:24:22.692 [2024-11-20 12:56:48.030581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
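Two consistency checks fall out of the layout dump above. The l2p region is listed at 80.00 MiB, which is exactly the 20971520 L2P entries times the 4-byte L2P address size reported by ftl_layout_setup; and those entries, at a 4096-byte FTL block size (an assumption here, since the dump only prints region sizes in MiB), address 81920 MiB of user data against the 102400.00 MiB data_btm region, an 80/20 split consistent with spare area held back for garbage collection. In Python (names ours):

    l2p_entries = 20971520          # "L2P entries" from ftl_layout_setup above
    addr_size   = 4                 # "L2P address size" in bytes
    BLOCK_SIZE  = 4096              # assumed FTL block size in bytes
    l2p_mib  = l2p_entries * addr_size / 2**20    # -> 80.0, matches "Region l2p ... blocks: 80.00 MiB"
    user_mib = l2p_entries * BLOCK_SIZE / 2**20   # -> 81920.0 MiB addressable by the L2P
    print(l2p_mib, user_mib, user_mib / 102400)   # 80.0 81920.0 0.8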
00:24:22.692 [2024-11-20 12:56:48.030588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:24:22.692 [2024-11-20 12:56:48.030596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:24:22.692 [2024-11-20 12:56:48.030603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:24:22.692 [2024-11-20 12:56:48.030610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:24:22.692 [2024-11-20 12:56:48.030617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:24:22.692 [2024-11-20 12:56:48.030624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:24:22.692 [2024-11-20 12:56:48.030632] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:24:22.692 [2024-11-20 12:56:48.030643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:22.692 [2024-11-20 12:56:48.030651] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:24:22.692 [2024-11-20 12:56:48.030658] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:24:22.692 [2024-11-20 12:56:48.030665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:24:22.692 [2024-11-20 12:56:48.030673] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:24:22.692 [2024-11-20 12:56:48.030680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:22.692 [2024-11-20 12:56:48.030688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:24:22.692 [2024-11-20 12:56:48.030695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.629 ms
00:24:22.692 [2024-11-20 12:56:48.030703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:22.692 [2024-11-20 12:56:48.062981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:22.692 [2024-11-20 12:56:48.063161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:24:22.692 [2024-11-20 12:56:48.063221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.220 ms
00:24:22.692 [2024-11-20 12:56:48.063246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:22.692 [2024-11-20 12:56:48.063358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:22.692 [2024-11-20 12:56:48.063381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:24:22.692 [2024-11-20 12:56:48.063401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms
00:24:22.692 [2024-11-20 12:56:48.063420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:22.692 [2024-11-20 12:56:48.108577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:22.692 [2024-11-20 12:56:48.108790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:24:22.692
[2024-11-20 12:56:48.108863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.085 ms 00:24:22.692 [2024-11-20 12:56:48.108888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.692 [2024-11-20 12:56:48.108953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.692 [2024-11-20 12:56:48.108978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:22.692 [2024-11-20 12:56:48.109000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:22.692 [2024-11-20 12:56:48.109026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.692 [2024-11-20 12:56:48.109596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.692 [2024-11-20 12:56:48.109734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:22.692 [2024-11-20 12:56:48.109915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:24:22.692 [2024-11-20 12:56:48.109956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.692 [2024-11-20 12:56:48.110163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.692 [2024-11-20 12:56:48.110578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:22.692 [2024-11-20 12:56:48.111054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:24:22.692 [2024-11-20 12:56:48.111161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.692 [2024-11-20 12:56:48.126973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.693 [2024-11-20 12:56:48.127138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:22.693 [2024-11-20 12:56:48.127207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.746 ms 00:24:22.693 [2024-11-20 12:56:48.127230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.693 [2024-11-20 12:56:48.141574] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:22.693 [2024-11-20 12:56:48.141771] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:22.693 [2024-11-20 12:56:48.141792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.693 [2024-11-20 12:56:48.141802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:22.693 [2024-11-20 12:56:48.141812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.266 ms 00:24:22.693 [2024-11-20 12:56:48.141820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.693 [2024-11-20 12:56:48.167651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.693 [2024-11-20 12:56:48.167708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:22.693 [2024-11-20 12:56:48.167721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.787 ms 00:24:22.693 [2024-11-20 12:56:48.167729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.693 [2024-11-20 12:56:48.180544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.693 [2024-11-20 12:56:48.180601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:22.693 [2024-11-20 12:56:48.180613] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 12.745 ms 00:24:22.693 [2024-11-20 12:56:48.180621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.693 [2024-11-20 12:56:48.193115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.693 [2024-11-20 12:56:48.193290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:22.693 [2024-11-20 12:56:48.193310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.448 ms 00:24:22.693 [2024-11-20 12:56:48.193318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.693 [2024-11-20 12:56:48.193996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.693 [2024-11-20 12:56:48.194025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:22.693 [2024-11-20 12:56:48.194036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:24:22.693 [2024-11-20 12:56:48.194047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.954 [2024-11-20 12:56:48.259087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.954 [2024-11-20 12:56:48.259156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:22.954 [2024-11-20 12:56:48.259180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.017 ms 00:24:22.954 [2024-11-20 12:56:48.259189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.954 [2024-11-20 12:56:48.271921] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:22.954 [2024-11-20 12:56:48.275443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.954 [2024-11-20 12:56:48.275488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:22.954 [2024-11-20 12:56:48.275501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.194 ms 00:24:22.954 [2024-11-20 12:56:48.275511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.954 [2024-11-20 12:56:48.275609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.954 [2024-11-20 12:56:48.275621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:22.954 [2024-11-20 12:56:48.275632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:22.954 [2024-11-20 12:56:48.275643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.954 [2024-11-20 12:56:48.277389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.954 [2024-11-20 12:56:48.277437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:22.954 [2024-11-20 12:56:48.277449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.707 ms 00:24:22.954 [2024-11-20 12:56:48.277457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.954 [2024-11-20 12:56:48.277488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.954 [2024-11-20 12:56:48.277497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:22.954 [2024-11-20 12:56:48.277507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:22.954 [2024-11-20 12:56:48.277515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.954 [2024-11-20 12:56:48.277558] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: 
[FTL][ftl0] Self test skipped 00:24:22.954 [2024-11-20 12:56:48.277572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.954 [2024-11-20 12:56:48.277581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:22.954 [2024-11-20 12:56:48.277590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:22.954 [2024-11-20 12:56:48.277598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.954 [2024-11-20 12:56:48.303367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.954 [2024-11-20 12:56:48.303417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:22.954 [2024-11-20 12:56:48.303431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.749 ms 00:24:22.954 [2024-11-20 12:56:48.303445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.954 [2024-11-20 12:56:48.303542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.954 [2024-11-20 12:56:48.303553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:22.954 [2024-11-20 12:56:48.303564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:24:22.954 [2024-11-20 12:56:48.303572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.954 [2024-11-20 12:56:48.304899] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 304.866 ms, result 0 00:24:24.337  [2024-11-20T12:56:50.796Z] Copying: 10088/1048576 [kB] (10088 kBps) [2024-11-20T12:56:51.736Z] Copying: 27/1024 [MB] (17 MBps) [2024-11-20T12:56:52.678Z] Copying: 45/1024 [MB] (18 MBps) [2024-11-20T12:56:53.618Z] Copying: 57/1024 [MB] (12 MBps) [2024-11-20T12:56:54.561Z] Copying: 78/1024 [MB] (20 MBps) [2024-11-20T12:56:55.503Z] Copying: 97/1024 [MB] (19 MBps) [2024-11-20T12:56:56.891Z] Copying: 121/1024 [MB] (23 MBps) [2024-11-20T12:56:57.835Z] Copying: 139/1024 [MB] (17 MBps) [2024-11-20T12:56:58.778Z] Copying: 158/1024 [MB] (19 MBps) [2024-11-20T12:56:59.717Z] Copying: 175/1024 [MB] (16 MBps) [2024-11-20T12:57:00.655Z] Copying: 195/1024 [MB] (20 MBps) [2024-11-20T12:57:01.600Z] Copying: 212/1024 [MB] (16 MBps) [2024-11-20T12:57:02.542Z] Copying: 232/1024 [MB] (19 MBps) [2024-11-20T12:57:03.941Z] Copying: 255/1024 [MB] (23 MBps) [2024-11-20T12:57:04.515Z] Copying: 279/1024 [MB] (24 MBps) [2024-11-20T12:57:05.566Z] Copying: 294/1024 [MB] (14 MBps) [2024-11-20T12:57:06.515Z] Copying: 305/1024 [MB] (10 MBps) [2024-11-20T12:57:07.903Z] Copying: 319/1024 [MB] (14 MBps) [2024-11-20T12:57:08.848Z] Copying: 332/1024 [MB] (12 MBps) [2024-11-20T12:57:09.792Z] Copying: 343/1024 [MB] (11 MBps) [2024-11-20T12:57:10.735Z] Copying: 354/1024 [MB] (10 MBps) [2024-11-20T12:57:11.681Z] Copying: 365/1024 [MB] (11 MBps) [2024-11-20T12:57:12.628Z] Copying: 376/1024 [MB] (10 MBps) [2024-11-20T12:57:13.572Z] Copying: 387/1024 [MB] (11 MBps) [2024-11-20T12:57:14.516Z] Copying: 399/1024 [MB] (11 MBps) [2024-11-20T12:57:15.901Z] Copying: 410/1024 [MB] (10 MBps) [2024-11-20T12:57:16.846Z] Copying: 420/1024 [MB] (10 MBps) [2024-11-20T12:57:17.816Z] Copying: 452/1024 [MB] (31 MBps) [2024-11-20T12:57:18.765Z] Copying: 470/1024 [MB] (18 MBps) [2024-11-20T12:57:19.710Z] Copying: 490/1024 [MB] (19 MBps) [2024-11-20T12:57:20.653Z] Copying: 508/1024 [MB] (18 MBps) [2024-11-20T12:57:21.598Z] Copying: 530/1024 [MB] (21 MBps) [2024-11-20T12:57:22.544Z] Copying: 549/1024 [MB] (19 
MBps) [2024-11-20T12:57:23.933Z] Copying: 570/1024 [MB] (20 MBps) [2024-11-20T12:57:24.505Z] Copying: 589/1024 [MB] (19 MBps) [2024-11-20T12:57:25.892Z] Copying: 607/1024 [MB] (17 MBps) [2024-11-20T12:57:26.837Z] Copying: 620/1024 [MB] (12 MBps) [2024-11-20T12:57:27.783Z] Copying: 636/1024 [MB] (16 MBps) [2024-11-20T12:57:28.729Z] Copying: 654/1024 [MB] (17 MBps) [2024-11-20T12:57:29.675Z] Copying: 672/1024 [MB] (18 MBps) [2024-11-20T12:57:30.618Z] Copying: 693/1024 [MB] (20 MBps) [2024-11-20T12:57:31.563Z] Copying: 713/1024 [MB] (19 MBps) [2024-11-20T12:57:32.508Z] Copying: 740/1024 [MB] (26 MBps) [2024-11-20T12:57:33.898Z] Copying: 768/1024 [MB] (27 MBps) [2024-11-20T12:57:34.843Z] Copying: 783/1024 [MB] (15 MBps) [2024-11-20T12:57:35.803Z] Copying: 802/1024 [MB] (18 MBps) [2024-11-20T12:57:36.750Z] Copying: 823/1024 [MB] (20 MBps) [2024-11-20T12:57:37.691Z] Copying: 840/1024 [MB] (17 MBps) [2024-11-20T12:57:38.634Z] Copying: 860/1024 [MB] (19 MBps) [2024-11-20T12:57:39.578Z] Copying: 877/1024 [MB] (17 MBps) [2024-11-20T12:57:40.522Z] Copying: 894/1024 [MB] (16 MBps) [2024-11-20T12:57:41.912Z] Copying: 915/1024 [MB] (21 MBps) [2024-11-20T12:57:42.856Z] Copying: 936/1024 [MB] (21 MBps) [2024-11-20T12:57:43.801Z] Copying: 955/1024 [MB] (19 MBps) [2024-11-20T12:57:44.747Z] Copying: 972/1024 [MB] (16 MBps) [2024-11-20T12:57:45.692Z] Copying: 994/1024 [MB] (22 MBps) [2024-11-20T12:57:46.638Z] Copying: 1011/1024 [MB] (17 MBps) [2024-11-20T12:57:46.900Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-20 12:57:46.657807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.381 [2024-11-20 12:57:46.657889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:21.381 [2024-11-20 12:57:46.657906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:21.381 [2024-11-20 12:57:46.657915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.381 [2024-11-20 12:57:46.657949] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:21.381 [2024-11-20 12:57:46.661260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.381 [2024-11-20 12:57:46.661467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:21.381 [2024-11-20 12:57:46.661491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.293 ms 00:25:21.381 [2024-11-20 12:57:46.661500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.381 [2024-11-20 12:57:46.662174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.381 [2024-11-20 12:57:46.662210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:21.381 [2024-11-20 12:57:46.662220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:25:21.381 [2024-11-20 12:57:46.662229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.381 [2024-11-20 12:57:46.668385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.381 [2024-11-20 12:57:46.668435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:21.381 [2024-11-20 12:57:46.668446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.134 ms 00:25:21.381 [2024-11-20 12:57:46.668454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.381 [2024-11-20 12:57:46.675348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
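The spdk_dd arguments --skip=131072 and --count=262144 are given in bdev blocks; at a 4096-byte block size for ftl0 (an assumption here, since spdk_dd does not print it) that is a 1024 MiB read starting 512 MiB into the device, which matches the 1024/1024 [MB] progress total above. The reported average also checks out against the span of the progress timestamps, roughly 12:56:50Z to 12:57:47Z. A quick sanity check in Python:

    BLOCK_SIZE = 4096                    # bytes; assumed ftl0 block size
    skip, count = 131072, 262144         # from the spdk_dd invocation above
    print(skip * BLOCK_SIZE // 2**20)    # -> 512 (MiB offset into ftl0)
    print(count * BLOCK_SIZE // 2**20)   # -> 1024 (MiB transferred)
    print(round(1024 / 57))              # ~57 s of progress -> ~18 MBps, near the logged 17 MBps average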
00:25:21.381 [2024-11-20 12:57:46.675392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:21.381 [2024-11-20 12:57:46.675403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.850 ms 00:25:21.381 [2024-11-20 12:57:46.675412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.381 [2024-11-20 12:57:46.703697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.381 [2024-11-20 12:57:46.703754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:21.381 [2024-11-20 12:57:46.703767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.231 ms 00:25:21.381 [2024-11-20 12:57:46.703775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.381 [2024-11-20 12:57:46.719671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.381 [2024-11-20 12:57:46.719723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:21.381 [2024-11-20 12:57:46.719761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.848 ms 00:25:21.381 [2024-11-20 12:57:46.719770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.643 [2024-11-20 12:57:47.082393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.643 [2024-11-20 12:57:47.082469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:21.643 [2024-11-20 12:57:47.082484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 362.569 ms 00:25:21.643 [2024-11-20 12:57:47.082493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.643 [2024-11-20 12:57:47.108498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.643 [2024-11-20 12:57:47.108698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:21.643 [2024-11-20 12:57:47.108720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.987 ms 00:25:21.643 [2024-11-20 12:57:47.108729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.643 [2024-11-20 12:57:47.133944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.643 [2024-11-20 12:57:47.133989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:21.643 [2024-11-20 12:57:47.134015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.141 ms 00:25:21.643 [2024-11-20 12:57:47.134022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.643 [2024-11-20 12:57:47.158590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.643 [2024-11-20 12:57:47.158635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:21.643 [2024-11-20 12:57:47.158648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.522 ms 00:25:21.643 [2024-11-20 12:57:47.158656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.907 [2024-11-20 12:57:47.183273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.907 [2024-11-20 12:57:47.183317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:21.907 [2024-11-20 12:57:47.183329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.502 ms 00:25:21.907 [2024-11-20 12:57:47.183337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.907 [2024-11-20 
12:57:47.183379] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:21.907 [2024-11-20 12:57:47.183395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:25:21.907 [2024-11-20 12:57:47.183407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 
12:57:47.183589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:25:21.907 [2024-11-20 12:57:47.183814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:21.907 [2024-11-20 12:57:47.183845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.183854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.183863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.183883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.183893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.183902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.183910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.183919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.183935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.183943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.183951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.183959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.183968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.183977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.183985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.183993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:21.908 [2024-11-20 12:57:47.184270] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:21.908 [2024-11-20 12:57:47.184278] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 935d31be-3433-439f-a336-7f65533c8f51 00:25:21.908 [2024-11-20 12:57:47.184287] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:25:21.908 [2024-11-20 12:57:47.184294] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 31168 00:25:21.908 [2024-11-20 12:57:47.184302] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 30208 00:25:21.908 [2024-11-20 12:57:47.184311] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0318 00:25:21.908 [2024-11-20 12:57:47.184319] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:21.908 [2024-11-20 12:57:47.184333] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:21.908 [2024-11-20 12:57:47.184340] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:21.908 [2024-11-20 12:57:47.184354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:21.908 [2024-11-20 12:57:47.184361] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:21.908 [2024-11-20 12:57:47.184368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.908 [2024-11-20 12:57:47.184377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:21.908 [2024-11-20 12:57:47.184386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.990 ms 00:25:21.908 [2024-11-20 12:57:47.184394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.908 [2024-11-20 12:57:47.197808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.908 [2024-11-20 12:57:47.197981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:21.908 [2024-11-20 12:57:47.197999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.394 ms 00:25:21.908 [2024-11-20 12:57:47.198015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.908 [2024-11-20 12:57:47.198414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.908 [2024-11-20 12:57:47.198426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:21.908 [2024-11-20 12:57:47.198436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:25:21.908 [2024-11-20 12:57:47.198443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.908 [2024-11-20 12:57:47.234893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.908 [2024-11-20 12:57:47.234939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:21.908 [2024-11-20 12:57:47.234954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.908 [2024-11-20 12:57:47.234963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.908 [2024-11-20 12:57:47.235037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.908 [2024-11-20 12:57:47.235047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:21.908 [2024-11-20 12:57:47.235057] 
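(As a quick check on the statistics dump above: the reported WAF is just total media writes divided by user writes,

    WAF = total writes / user writes = 31168 / 30208 ≈ 1.0318

i.e. roughly 3% of the writes that reached the media were FTL housekeeping rather than user data.)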
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.908 [2024-11-20 12:57:47.235066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.908 [2024-11-20 12:57:47.235137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.908 [2024-11-20 12:57:47.235148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:21.908 [2024-11-20 12:57:47.235158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.908 [2024-11-20 12:57:47.235170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.908 [2024-11-20 12:57:47.235186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.908 [2024-11-20 12:57:47.235196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:21.908 [2024-11-20 12:57:47.235205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.908 [2024-11-20 12:57:47.235213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.908 [2024-11-20 12:57:47.319500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.908 [2024-11-20 12:57:47.319550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:21.908 [2024-11-20 12:57:47.319572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.908 [2024-11-20 12:57:47.319581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.908 [2024-11-20 12:57:47.388407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.908 [2024-11-20 12:57:47.388465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:21.908 [2024-11-20 12:57:47.388478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.908 [2024-11-20 12:57:47.388486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.908 [2024-11-20 12:57:47.388581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.908 [2024-11-20 12:57:47.388591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:21.908 [2024-11-20 12:57:47.388600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.908 [2024-11-20 12:57:47.388609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.908 [2024-11-20 12:57:47.388653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.908 [2024-11-20 12:57:47.388663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:21.908 [2024-11-20 12:57:47.388672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.908 [2024-11-20 12:57:47.388680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.909 [2024-11-20 12:57:47.388799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.909 [2024-11-20 12:57:47.388811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:21.909 [2024-11-20 12:57:47.388820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.909 [2024-11-20 12:57:47.388828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.909 [2024-11-20 12:57:47.388862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.909 [2024-11-20 12:57:47.388872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:25:21.909 [2024-11-20 12:57:47.388882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.909 [2024-11-20 12:57:47.388891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.909 [2024-11-20 12:57:47.388946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.909 [2024-11-20 12:57:47.388957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:21.909 [2024-11-20 12:57:47.388966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.909 [2024-11-20 12:57:47.388975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.909 [2024-11-20 12:57:47.389025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.909 [2024-11-20 12:57:47.389036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:21.909 [2024-11-20 12:57:47.389045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.909 [2024-11-20 12:57:47.389053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.909 [2024-11-20 12:57:47.389190] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 731.345 ms, result 0 00:25:22.854 00:25:22.854 00:25:22.854 12:57:48 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:25.478 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:25.478 12:57:50 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:25.478 12:57:50 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:25.478 12:57:50 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:25.478 12:57:50 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:25.478 12:57:50 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:25.478 Process with pid 77260 is not found 00:25:25.478 12:57:50 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77260 00:25:25.478 12:57:50 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77260 ']' 00:25:25.478 12:57:50 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77260 00:25:25.478 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77260) - No such process 00:25:25.478 12:57:50 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77260 is not found' 00:25:25.478 Remove shared memory files 00:25:25.478 12:57:50 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:25:25.478 12:57:50 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:25.478 12:57:50 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:25.478 12:57:50 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:25:25.478 12:57:50 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:25.478 12:57:50 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:25.478 12:57:50 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:25.478 ************************************ 00:25:25.478 END TEST ftl_restore 00:25:25.478 ************************************ 00:25:25.478 00:25:25.478 real 4m30.554s 00:25:25.478 user 4m18.710s 00:25:25.478 sys 0m11.590s 00:25:25.478 12:57:50 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # 
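(The killprocess trace above leans on `kill -0`: signal 0 delivers nothing and only reports, via the exit status, whether the pid still exists. A minimal sketch of the pattern, with a hypothetical $pid:

    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"                                   # still alive: terminate it
    else
        echo "Process with pid $pid is not found"     # already gone, as happened here
    fi

In this run pid 77260 had already exited, so the probe fails with "No such process" and the test simply logs the not-found message and carries on.)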
xtrace_disable 00:25:25.478 12:57:50 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:25.478 12:57:50 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:25.478 12:57:50 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:25.478 12:57:50 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:25.478 12:57:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:25.478 ************************************ 00:25:25.478 START TEST ftl_dirty_shutdown 00:25:25.478 ************************************ 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:25.478 * Looking for test storage... 00:25:25.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:25.478 12:57:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:25.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.478 --rc genhtml_branch_coverage=1 00:25:25.478 --rc genhtml_function_coverage=1 00:25:25.478 --rc genhtml_legend=1 00:25:25.478 --rc geninfo_all_blocks=1 00:25:25.478 --rc geninfo_unexecuted_blocks=1 00:25:25.479 00:25:25.479 ' 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:25.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.479 --rc genhtml_branch_coverage=1 00:25:25.479 --rc genhtml_function_coverage=1 00:25:25.479 --rc genhtml_legend=1 00:25:25.479 --rc geninfo_all_blocks=1 00:25:25.479 --rc geninfo_unexecuted_blocks=1 00:25:25.479 00:25:25.479 ' 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:25.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.479 --rc genhtml_branch_coverage=1 00:25:25.479 --rc genhtml_function_coverage=1 00:25:25.479 --rc genhtml_legend=1 00:25:25.479 --rc geninfo_all_blocks=1 00:25:25.479 --rc geninfo_unexecuted_blocks=1 00:25:25.479 00:25:25.479 ' 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:25.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.479 --rc genhtml_branch_coverage=1 00:25:25.479 --rc genhtml_function_coverage=1 00:25:25.479 --rc genhtml_legend=1 00:25:25.479 --rc geninfo_all_blocks=1 00:25:25.479 --rc geninfo_unexecuted_blocks=1 00:25:25.479 00:25:25.479 ' 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:25.479 12:57:50 
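(The lt/cmp_versions trace above compares version strings componentwise: IFS=.- splits each string on dots and dashes into an array, each component is validated as numeric, and the first differing component decides the ordering. A condensed, numeric-only sketch of that logic, not the exact scripts/common.sh code:

    lt() {    # returns 0 (true) when $1 sorts before $2
        local -a ver1 ver2
        local v
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2.x"    # matches the trace: first components 1 < 2

)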
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=80151 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 80151 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80151 ']' 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:25.479 12:57:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:25.479 [2024-11-20 12:57:50.826668] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
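(Between the spdk_tgt launch and the `return 0` a few lines below, waitforlisten simply blocks until the target's UNIX-domain RPC socket answers. A simplified sketch of that polling loop, not the exact autotest_common.sh implementation:

    build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    # probe the RPC socket until the target is ready, giving up after ~10 s
    for (( i = 0; i < 100; i++ )); do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

rpc_get_methods is a cheap RPC that any live SPDK target answers, which makes it a convenient readiness probe.)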
00:25:25.479 [2024-11-20 12:57:50.827076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80151 ] 00:25:25.479 [2024-11-20 12:57:50.992669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.741 [2024-11-20 12:57:51.119367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.688 12:57:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:26.688 12:57:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:26.688 12:57:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:26.688 12:57:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:26.688 12:57:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:26.688 12:57:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:26.688 12:57:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:26.688 12:57:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:26.688 12:57:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:26.688 12:57:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:26.688 12:57:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:26.688 12:57:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:26.688 12:57:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:26.688 12:57:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:26.688 12:57:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:26.688 12:57:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:26.951 12:57:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:26.951 { 00:25:26.951 "name": "nvme0n1", 00:25:26.951 "aliases": [ 00:25:26.951 "05ce64ae-5c4a-49d7-93d0-263053ec26a1" 00:25:26.951 ], 00:25:26.951 "product_name": "NVMe disk", 00:25:26.951 "block_size": 4096, 00:25:26.951 "num_blocks": 1310720, 00:25:26.951 "uuid": "05ce64ae-5c4a-49d7-93d0-263053ec26a1", 00:25:26.951 "numa_id": -1, 00:25:26.951 "assigned_rate_limits": { 00:25:26.951 "rw_ios_per_sec": 0, 00:25:26.951 "rw_mbytes_per_sec": 0, 00:25:26.951 "r_mbytes_per_sec": 0, 00:25:26.951 "w_mbytes_per_sec": 0 00:25:26.951 }, 00:25:26.951 "claimed": true, 00:25:26.951 "claim_type": "read_many_write_one", 00:25:26.951 "zoned": false, 00:25:26.951 "supported_io_types": { 00:25:26.951 "read": true, 00:25:26.951 "write": true, 00:25:26.951 "unmap": true, 00:25:26.951 "flush": true, 00:25:26.951 "reset": true, 00:25:26.951 "nvme_admin": true, 00:25:26.951 "nvme_io": true, 00:25:26.951 "nvme_io_md": false, 00:25:26.951 "write_zeroes": true, 00:25:26.951 "zcopy": false, 00:25:26.951 "get_zone_info": false, 00:25:26.951 "zone_management": false, 00:25:26.951 "zone_append": false, 00:25:26.951 "compare": true, 00:25:26.951 "compare_and_write": false, 00:25:26.951 "abort": true, 00:25:26.951 "seek_hole": false, 00:25:26.951 "seek_data": false, 00:25:26.951 
"copy": true, 00:25:26.951 "nvme_iov_md": false 00:25:26.951 }, 00:25:26.951 "driver_specific": { 00:25:26.951 "nvme": [ 00:25:26.951 { 00:25:26.951 "pci_address": "0000:00:11.0", 00:25:26.951 "trid": { 00:25:26.951 "trtype": "PCIe", 00:25:26.951 "traddr": "0000:00:11.0" 00:25:26.951 }, 00:25:26.951 "ctrlr_data": { 00:25:26.951 "cntlid": 0, 00:25:26.951 "vendor_id": "0x1b36", 00:25:26.951 "model_number": "QEMU NVMe Ctrl", 00:25:26.951 "serial_number": "12341", 00:25:26.951 "firmware_revision": "8.0.0", 00:25:26.951 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:26.951 "oacs": { 00:25:26.951 "security": 0, 00:25:26.951 "format": 1, 00:25:26.951 "firmware": 0, 00:25:26.951 "ns_manage": 1 00:25:26.951 }, 00:25:26.951 "multi_ctrlr": false, 00:25:26.951 "ana_reporting": false 00:25:26.951 }, 00:25:26.951 "vs": { 00:25:26.951 "nvme_version": "1.4" 00:25:26.951 }, 00:25:26.951 "ns_data": { 00:25:26.951 "id": 1, 00:25:26.951 "can_share": false 00:25:26.951 } 00:25:26.951 } 00:25:26.951 ], 00:25:26.951 "mp_policy": "active_passive" 00:25:26.951 } 00:25:26.951 } 00:25:26.951 ]' 00:25:26.951 12:57:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:26.951 12:57:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:26.951 12:57:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:27.214 12:57:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:27.214 12:57:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:27.214 12:57:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:27.214 12:57:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:27.214 12:57:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:27.214 12:57:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:27.214 12:57:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:27.214 12:57:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:27.214 12:57:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=8de3f431-add8-43da-957f-ad5c710e8b62 00:25:27.214 12:57:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:27.214 12:57:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8de3f431-add8-43da-957f-ad5c710e8b62 00:25:27.477 12:57:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:27.738 12:57:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=e9f75d8f-69d6-4953-85e4-ce3927fa5828 00:25:27.739 12:57:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e9f75d8f-69d6-4953-85e4-ce3927fa5828 00:25:28.000 12:57:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=7ac855d4-b2eb-46b0-b1ff-0c436d9ab000 00:25:28.000 12:57:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:28.000 12:57:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7ac855d4-b2eb-46b0-b1ff-0c436d9ab000 00:25:28.000 12:57:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:28.000 12:57:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:25:28.000 12:57:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=7ac855d4-b2eb-46b0-b1ff-0c436d9ab000 00:25:28.000 12:57:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:28.000 12:57:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 7ac855d4-b2eb-46b0-b1ff-0c436d9ab000 00:25:28.000 12:57:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=7ac855d4-b2eb-46b0-b1ff-0c436d9ab000 00:25:28.000 12:57:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:28.000 12:57:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:28.000 12:57:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:28.000 12:57:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7ac855d4-b2eb-46b0-b1ff-0c436d9ab000 00:25:28.262 12:57:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:28.262 { 00:25:28.262 "name": "7ac855d4-b2eb-46b0-b1ff-0c436d9ab000", 00:25:28.262 "aliases": [ 00:25:28.262 "lvs/nvme0n1p0" 00:25:28.262 ], 00:25:28.262 "product_name": "Logical Volume", 00:25:28.262 "block_size": 4096, 00:25:28.262 "num_blocks": 26476544, 00:25:28.262 "uuid": "7ac855d4-b2eb-46b0-b1ff-0c436d9ab000", 00:25:28.262 "assigned_rate_limits": { 00:25:28.262 "rw_ios_per_sec": 0, 00:25:28.262 "rw_mbytes_per_sec": 0, 00:25:28.262 "r_mbytes_per_sec": 0, 00:25:28.262 "w_mbytes_per_sec": 0 00:25:28.262 }, 00:25:28.262 "claimed": false, 00:25:28.262 "zoned": false, 00:25:28.262 "supported_io_types": { 00:25:28.262 "read": true, 00:25:28.262 "write": true, 00:25:28.262 "unmap": true, 00:25:28.262 "flush": false, 00:25:28.262 "reset": true, 00:25:28.262 "nvme_admin": false, 00:25:28.262 "nvme_io": false, 00:25:28.262 "nvme_io_md": false, 00:25:28.262 "write_zeroes": true, 00:25:28.262 "zcopy": false, 00:25:28.262 "get_zone_info": false, 00:25:28.262 "zone_management": false, 00:25:28.262 "zone_append": false, 00:25:28.262 "compare": false, 00:25:28.262 "compare_and_write": false, 00:25:28.262 "abort": false, 00:25:28.262 "seek_hole": true, 00:25:28.262 "seek_data": true, 00:25:28.262 "copy": false, 00:25:28.262 "nvme_iov_md": false 00:25:28.262 }, 00:25:28.262 "driver_specific": { 00:25:28.262 "lvol": { 00:25:28.262 "lvol_store_uuid": "e9f75d8f-69d6-4953-85e4-ce3927fa5828", 00:25:28.262 "base_bdev": "nvme0n1", 00:25:28.262 "thin_provision": true, 00:25:28.262 "num_allocated_clusters": 0, 00:25:28.262 "snapshot": false, 00:25:28.262 "clone": false, 00:25:28.262 "esnap_clone": false 00:25:28.262 } 00:25:28.262 } 00:25:28.262 } 00:25:28.262 ]' 00:25:28.262 12:57:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:28.262 12:57:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:28.262 12:57:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:28.262 12:57:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:28.262 12:57:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:28.262 12:57:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:28.262 12:57:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:28.262 12:57:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:28.262 12:57:53 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:28.525 12:57:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:28.525 12:57:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:28.525 12:57:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 7ac855d4-b2eb-46b0-b1ff-0c436d9ab000 00:25:28.525 12:57:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=7ac855d4-b2eb-46b0-b1ff-0c436d9ab000 00:25:28.525 12:57:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:28.525 12:57:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:28.525 12:57:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:28.525 12:57:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7ac855d4-b2eb-46b0-b1ff-0c436d9ab000 00:25:28.787 12:57:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:28.787 { 00:25:28.787 "name": "7ac855d4-b2eb-46b0-b1ff-0c436d9ab000", 00:25:28.787 "aliases": [ 00:25:28.787 "lvs/nvme0n1p0" 00:25:28.787 ], 00:25:28.787 "product_name": "Logical Volume", 00:25:28.787 "block_size": 4096, 00:25:28.787 "num_blocks": 26476544, 00:25:28.787 "uuid": "7ac855d4-b2eb-46b0-b1ff-0c436d9ab000", 00:25:28.787 "assigned_rate_limits": { 00:25:28.787 "rw_ios_per_sec": 0, 00:25:28.787 "rw_mbytes_per_sec": 0, 00:25:28.787 "r_mbytes_per_sec": 0, 00:25:28.787 "w_mbytes_per_sec": 0 00:25:28.787 }, 00:25:28.787 "claimed": false, 00:25:28.787 "zoned": false, 00:25:28.787 "supported_io_types": { 00:25:28.787 "read": true, 00:25:28.787 "write": true, 00:25:28.787 "unmap": true, 00:25:28.787 "flush": false, 00:25:28.787 "reset": true, 00:25:28.787 "nvme_admin": false, 00:25:28.787 "nvme_io": false, 00:25:28.787 "nvme_io_md": false, 00:25:28.787 "write_zeroes": true, 00:25:28.787 "zcopy": false, 00:25:28.787 "get_zone_info": false, 00:25:28.787 "zone_management": false, 00:25:28.787 "zone_append": false, 00:25:28.787 "compare": false, 00:25:28.787 "compare_and_write": false, 00:25:28.787 "abort": false, 00:25:28.787 "seek_hole": true, 00:25:28.787 "seek_data": true, 00:25:28.787 "copy": false, 00:25:28.787 "nvme_iov_md": false 00:25:28.787 }, 00:25:28.787 "driver_specific": { 00:25:28.787 "lvol": { 00:25:28.787 "lvol_store_uuid": "e9f75d8f-69d6-4953-85e4-ce3927fa5828", 00:25:28.787 "base_bdev": "nvme0n1", 00:25:28.787 "thin_provision": true, 00:25:28.787 "num_allocated_clusters": 0, 00:25:28.787 "snapshot": false, 00:25:28.787 "clone": false, 00:25:28.787 "esnap_clone": false 00:25:28.787 } 00:25:28.787 } 00:25:28.787 } 00:25:28.787 ]' 00:25:28.787 12:57:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:28.787 12:57:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:28.787 12:57:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:28.787 12:57:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:28.787 12:57:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:28.787 12:57:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:28.787 12:57:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:25:28.787 12:57:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:29.049 12:57:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:29.050 12:57:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 7ac855d4-b2eb-46b0-b1ff-0c436d9ab000 00:25:29.050 12:57:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=7ac855d4-b2eb-46b0-b1ff-0c436d9ab000 00:25:29.050 12:57:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:29.050 12:57:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:29.050 12:57:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:29.050 12:57:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7ac855d4-b2eb-46b0-b1ff-0c436d9ab000 00:25:29.311 12:57:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:29.312 { 00:25:29.312 "name": "7ac855d4-b2eb-46b0-b1ff-0c436d9ab000", 00:25:29.312 "aliases": [ 00:25:29.312 "lvs/nvme0n1p0" 00:25:29.312 ], 00:25:29.312 "product_name": "Logical Volume", 00:25:29.312 "block_size": 4096, 00:25:29.312 "num_blocks": 26476544, 00:25:29.312 "uuid": "7ac855d4-b2eb-46b0-b1ff-0c436d9ab000", 00:25:29.312 "assigned_rate_limits": { 00:25:29.312 "rw_ios_per_sec": 0, 00:25:29.312 "rw_mbytes_per_sec": 0, 00:25:29.312 "r_mbytes_per_sec": 0, 00:25:29.312 "w_mbytes_per_sec": 0 00:25:29.312 }, 00:25:29.312 "claimed": false, 00:25:29.312 "zoned": false, 00:25:29.312 "supported_io_types": { 00:25:29.312 "read": true, 00:25:29.312 "write": true, 00:25:29.312 "unmap": true, 00:25:29.312 "flush": false, 00:25:29.312 "reset": true, 00:25:29.312 "nvme_admin": false, 00:25:29.312 "nvme_io": false, 00:25:29.312 "nvme_io_md": false, 00:25:29.312 "write_zeroes": true, 00:25:29.312 "zcopy": false, 00:25:29.312 "get_zone_info": false, 00:25:29.312 "zone_management": false, 00:25:29.312 "zone_append": false, 00:25:29.312 "compare": false, 00:25:29.312 "compare_and_write": false, 00:25:29.312 "abort": false, 00:25:29.312 "seek_hole": true, 00:25:29.312 "seek_data": true, 00:25:29.312 "copy": false, 00:25:29.312 "nvme_iov_md": false 00:25:29.312 }, 00:25:29.312 "driver_specific": { 00:25:29.312 "lvol": { 00:25:29.312 "lvol_store_uuid": "e9f75d8f-69d6-4953-85e4-ce3927fa5828", 00:25:29.312 "base_bdev": "nvme0n1", 00:25:29.312 "thin_provision": true, 00:25:29.312 "num_allocated_clusters": 0, 00:25:29.312 "snapshot": false, 00:25:29.312 "clone": false, 00:25:29.312 "esnap_clone": false 00:25:29.312 } 00:25:29.312 } 00:25:29.312 } 00:25:29.312 ]' 00:25:29.312 12:57:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:29.312 12:57:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:29.312 12:57:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:29.312 12:57:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:29.312 12:57:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:29.312 12:57:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:29.312 12:57:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:25:29.312 12:57:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 7ac855d4-b2eb-46b0-b1ff-0c436d9ab000 
--l2p_dram_limit 10' 00:25:29.312 12:57:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:25:29.312 12:57:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:25:29.312 12:57:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:29.312 12:57:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7ac855d4-b2eb-46b0-b1ff-0c436d9ab000 --l2p_dram_limit 10 -c nvc0n1p0 00:25:29.574 [2024-11-20 12:57:54.858156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.574 [2024-11-20 12:57:54.858370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:29.574 [2024-11-20 12:57:54.858400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:29.574 [2024-11-20 12:57:54.858410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.574 [2024-11-20 12:57:54.858494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.574 [2024-11-20 12:57:54.858505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:29.574 [2024-11-20 12:57:54.858517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:25:29.574 [2024-11-20 12:57:54.858525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.574 [2024-11-20 12:57:54.858553] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:29.574 [2024-11-20 12:57:54.859427] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:29.574 [2024-11-20 12:57:54.859460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.574 [2024-11-20 12:57:54.859468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:29.574 [2024-11-20 12:57:54.859480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.913 ms 00:25:29.574 [2024-11-20 12:57:54.859488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.574 [2024-11-20 12:57:54.859684] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f0a2d148-51f7-42a1-a3b5-778bfb33a11b 00:25:29.574 [2024-11-20 12:57:54.861453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.574 [2024-11-20 12:57:54.861721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:29.574 [2024-11-20 12:57:54.861761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:29.574 [2024-11-20 12:57:54.861775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.574 [2024-11-20 12:57:54.870386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.574 [2024-11-20 12:57:54.870437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:29.574 [2024-11-20 12:57:54.870450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.553 ms 00:25:29.574 [2024-11-20 12:57:54.870461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.574 [2024-11-20 12:57:54.870564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.574 [2024-11-20 12:57:54.870576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:29.574 [2024-11-20 12:57:54.870586] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:25:29.574 [2024-11-20 12:57:54.870599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.574 [2024-11-20 12:57:54.870662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.574 [2024-11-20 12:57:54.870675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:29.574 [2024-11-20 12:57:54.870684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:29.574 [2024-11-20 12:57:54.870697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.574 [2024-11-20 12:57:54.870721] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:29.574 [2024-11-20 12:57:54.875078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.574 [2024-11-20 12:57:54.875119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:29.574 [2024-11-20 12:57:54.875134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.362 ms 00:25:29.574 [2024-11-20 12:57:54.875142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.574 [2024-11-20 12:57:54.875182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.575 [2024-11-20 12:57:54.875190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:29.575 [2024-11-20 12:57:54.875201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:29.575 [2024-11-20 12:57:54.875209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.575 [2024-11-20 12:57:54.875247] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:29.575 [2024-11-20 12:57:54.875393] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:29.575 [2024-11-20 12:57:54.875410] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:29.575 [2024-11-20 12:57:54.875422] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:29.575 [2024-11-20 12:57:54.875435] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:29.575 [2024-11-20 12:57:54.875445] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:29.575 [2024-11-20 12:57:54.875456] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:29.575 [2024-11-20 12:57:54.875463] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:29.575 [2024-11-20 12:57:54.875476] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:29.575 [2024-11-20 12:57:54.875484] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:29.575 [2024-11-20 12:57:54.875495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.575 [2024-11-20 12:57:54.875503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:29.575 [2024-11-20 12:57:54.875513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:25:29.575 [2024-11-20 12:57:54.875527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.575 [2024-11-20 12:57:54.875614] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.575 [2024-11-20 12:57:54.875623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:29.575 [2024-11-20 12:57:54.875633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:29.575 [2024-11-20 12:57:54.875641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.575 [2024-11-20 12:57:54.875774] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:29.575 [2024-11-20 12:57:54.875786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:29.575 [2024-11-20 12:57:54.875797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:29.575 [2024-11-20 12:57:54.875806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:29.575 [2024-11-20 12:57:54.875816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:29.575 [2024-11-20 12:57:54.875823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:29.575 [2024-11-20 12:57:54.875831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:29.575 [2024-11-20 12:57:54.875839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:29.575 [2024-11-20 12:57:54.875849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:29.575 [2024-11-20 12:57:54.875855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:29.575 [2024-11-20 12:57:54.875864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:29.575 [2024-11-20 12:57:54.875883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:29.575 [2024-11-20 12:57:54.875892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:29.575 [2024-11-20 12:57:54.875900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:29.575 [2024-11-20 12:57:54.875910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:29.575 [2024-11-20 12:57:54.875916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:29.575 [2024-11-20 12:57:54.875927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:29.575 [2024-11-20 12:57:54.875934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:29.575 [2024-11-20 12:57:54.875945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:29.575 [2024-11-20 12:57:54.875952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:29.575 [2024-11-20 12:57:54.875961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:29.575 [2024-11-20 12:57:54.875968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:29.575 [2024-11-20 12:57:54.875976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:29.575 [2024-11-20 12:57:54.875987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:29.575 [2024-11-20 12:57:54.875996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:29.575 [2024-11-20 12:57:54.876003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:29.575 [2024-11-20 12:57:54.876012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:29.575 [2024-11-20 12:57:54.876019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:29.575 [2024-11-20 12:57:54.876028] 
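(The layout figures here are internally consistent: 20971520 L2P entries at an address size of 4 bytes is 20971520 * 4 = 83886080 bytes, exactly the 80.00 MiB shown for the l2p region.)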
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:29.575 [2024-11-20 12:57:54.876034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:29.575 [2024-11-20 12:57:54.876043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:29.575 [2024-11-20 12:57:54.876050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:29.575 [2024-11-20 12:57:54.876061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:29.575 [2024-11-20 12:57:54.876068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:29.575 [2024-11-20 12:57:54.876077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:29.575 [2024-11-20 12:57:54.876084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:29.575 [2024-11-20 12:57:54.876092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:29.575 [2024-11-20 12:57:54.876099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:29.575 [2024-11-20 12:57:54.876107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:29.575 [2024-11-20 12:57:54.876114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:29.575 [2024-11-20 12:57:54.876123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:29.575 [2024-11-20 12:57:54.876129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:29.575 [2024-11-20 12:57:54.876137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:29.575 [2024-11-20 12:57:54.876144] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:29.575 [2024-11-20 12:57:54.876154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:29.575 [2024-11-20 12:57:54.876161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:29.575 [2024-11-20 12:57:54.876172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:29.575 [2024-11-20 12:57:54.876180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:29.575 [2024-11-20 12:57:54.876192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:29.575 [2024-11-20 12:57:54.876198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:29.575 [2024-11-20 12:57:54.876207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:29.575 [2024-11-20 12:57:54.876214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:29.575 [2024-11-20 12:57:54.876222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:29.575 [2024-11-20 12:57:54.876232] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:29.575 [2024-11-20 12:57:54.876243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:29.575 [2024-11-20 12:57:54.876257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:29.575 [2024-11-20 12:57:54.876267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:29.575 [2024-11-20 12:57:54.876274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:29.575 [2024-11-20 12:57:54.876283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:29.575 [2024-11-20 12:57:54.876291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:29.575 [2024-11-20 12:57:54.876301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:29.575 [2024-11-20 12:57:54.876308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:29.575 [2024-11-20 12:57:54.876317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:29.575 [2024-11-20 12:57:54.876325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:29.575 [2024-11-20 12:57:54.876336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:29.575 [2024-11-20 12:57:54.876343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:29.575 [2024-11-20 12:57:54.876352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:29.575 [2024-11-20 12:57:54.876359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:29.575 [2024-11-20 12:57:54.876370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:29.575 [2024-11-20 12:57:54.876377] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:29.575 [2024-11-20 12:57:54.876388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:29.575 [2024-11-20 12:57:54.876395] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:29.575 [2024-11-20 12:57:54.876405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:29.575 [2024-11-20 12:57:54.876412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:29.575 [2024-11-20 12:57:54.876422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:29.575 [2024-11-20 12:57:54.876431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.576 [2024-11-20 12:57:54.876440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:29.576 [2024-11-20 12:57:54.876448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:25:29.576 [2024-11-20 12:57:54.876457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.576 [2024-11-20 12:57:54.876498] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:29.576 [2024-11-20 12:57:54.876512] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:33.787 [2024-11-20 12:57:59.113930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.787 [2024-11-20 12:57:59.114028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:33.787 [2024-11-20 12:57:59.114055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4237.414 ms 00:25:33.787 [2024-11-20 12:57:59.114073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.787 [2024-11-20 12:57:59.147610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.787 [2024-11-20 12:57:59.147686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:33.787 [2024-11-20 12:57:59.147709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.201 ms 00:25:33.787 [2024-11-20 12:57:59.147724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.787 [2024-11-20 12:57:59.147957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.787 [2024-11-20 12:57:59.147983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:33.787 [2024-11-20 12:57:59.147999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:25:33.787 [2024-11-20 12:57:59.148020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.787 [2024-11-20 12:57:59.184339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.787 [2024-11-20 12:57:59.184580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:33.787 [2024-11-20 12:57:59.184608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.246 ms 00:25:33.787 [2024-11-20 12:57:59.184622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.787 [2024-11-20 12:57:59.184675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.787 [2024-11-20 12:57:59.184698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:33.787 [2024-11-20 12:57:59.184712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:33.787 [2024-11-20 12:57:59.184727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.787 [2024-11-20 12:57:59.185419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.787 [2024-11-20 12:57:59.185471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:33.787 [2024-11-20 12:57:59.185486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:25:33.787 [2024-11-20 12:57:59.185500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.787 [2024-11-20 12:57:59.185666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.787 [2024-11-20 12:57:59.185685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:33.787 [2024-11-20 12:57:59.185703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:25:33.787 [2024-11-20 12:57:59.185722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.787 [2024-11-20 12:57:59.203688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.787 [2024-11-20 12:57:59.203944] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:33.787 [2024-11-20 12:57:59.203971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.921 ms 00:25:33.787 [2024-11-20 12:57:59.203987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.787 [2024-11-20 12:57:59.217839] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:33.787 [2024-11-20 12:57:59.222010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.787 [2024-11-20 12:57:59.222060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:33.787 [2024-11-20 12:57:59.222081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.892 ms 00:25:33.787 [2024-11-20 12:57:59.222093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.049 [2024-11-20 12:57:59.341349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.049 [2024-11-20 12:57:59.341417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:34.049 [2024-11-20 12:57:59.341443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 119.199 ms 00:25:34.049 [2024-11-20 12:57:59.341456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.049 [2024-11-20 12:57:59.341763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.049 [2024-11-20 12:57:59.341794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:34.049 [2024-11-20 12:57:59.341816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:25:34.049 [2024-11-20 12:57:59.341830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.049 [2024-11-20 12:57:59.368402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.049 [2024-11-20 12:57:59.368466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:34.049 [2024-11-20 12:57:59.368491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.483 ms 00:25:34.049 [2024-11-20 12:57:59.368503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.049 [2024-11-20 12:57:59.394469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.049 [2024-11-20 12:57:59.394525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:34.049 [2024-11-20 12:57:59.394550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.887 ms 00:25:34.049 [2024-11-20 12:57:59.394561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.049 [2024-11-20 12:57:59.395335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.049 [2024-11-20 12:57:59.395380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:34.049 [2024-11-20 12:57:59.395400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:25:34.049 [2024-11-20 12:57:59.395410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.049 [2024-11-20 12:57:59.487628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.049 [2024-11-20 12:57:59.487702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:34.049 [2024-11-20 12:57:59.487731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.138 ms 00:25:34.049 [2024-11-20 12:57:59.487777] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.049 [2024-11-20 12:57:59.516692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.049 [2024-11-20 12:57:59.516770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:34.049 [2024-11-20 12:57:59.516794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.690 ms 00:25:34.049 [2024-11-20 12:57:59.516806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.049 [2024-11-20 12:57:59.543541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.049 [2024-11-20 12:57:59.543592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:34.049 [2024-11-20 12:57:59.543608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.663 ms 00:25:34.049 [2024-11-20 12:57:59.543616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.312 [2024-11-20 12:57:59.571005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.313 [2024-11-20 12:57:59.571059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:34.313 [2024-11-20 12:57:59.571076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.327 ms 00:25:34.313 [2024-11-20 12:57:59.571083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.313 [2024-11-20 12:57:59.571145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.313 [2024-11-20 12:57:59.571155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:34.313 [2024-11-20 12:57:59.571169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:34.313 [2024-11-20 12:57:59.571178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.313 [2024-11-20 12:57:59.571292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.313 [2024-11-20 12:57:59.571303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:34.313 [2024-11-20 12:57:59.571317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:34.313 [2024-11-20 12:57:59.571325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.313 [2024-11-20 12:57:59.572611] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4713.943 ms, result 0 00:25:34.313 { 00:25:34.313 "name": "ftl0", 00:25:34.313 "uuid": "f0a2d148-51f7-42a1-a3b5-778bfb33a11b" 00:25:34.313 } 00:25:34.313 12:57:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:34.313 12:57:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:34.313 12:57:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:34.313 12:57:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:34.313 12:57:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:34.575 /dev/nbd0 00:25:34.575 12:58:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:34.575 12:58:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:34.575 12:58:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:25:34.575 12:58:00 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:34.575 12:58:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:34.575 12:58:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:34.575 12:58:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:25:34.575 12:58:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:34.575 12:58:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:34.575 12:58:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:34.575 1+0 records in 00:25:34.575 1+0 records out 00:25:34.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374717 s, 10.9 MB/s 00:25:34.575 12:58:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:34.575 12:58:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:25:34.575 12:58:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:34.575 12:58:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:34.575 12:58:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:25:34.575 12:58:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:34.834 [2024-11-20 12:58:00.152573] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:25:34.834 [2024-11-20 12:58:00.152961] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80307 ] 00:25:34.834 [2024-11-20 12:58:00.314990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.092 [2024-11-20 12:58:00.446450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.479  [2024-11-20T12:58:02.942Z] Copying: 189/1024 [MB] (189 MBps) [2024-11-20T12:58:03.886Z] Copying: 379/1024 [MB] (190 MBps) [2024-11-20T12:58:04.828Z] Copying: 608/1024 [MB] (229 MBps) [2024-11-20T12:58:05.401Z] Copying: 862/1024 [MB] (253 MBps) [2024-11-20T12:58:05.973Z] Copying: 1024/1024 [MB] (average 220 MBps) 00:25:40.454 00:25:40.454 12:58:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:43.051 12:58:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:25:43.052 [2024-11-20 12:58:08.052512] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
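The xtrace above walks through autotest_common.sh's waitfornbd helper: poll /proc/partitions until the nbd device registers, then prove it can serve one 4 KiB O_DIRECT read. A minimal standalone sketch of that pattern follows; the retry budget, sleep, and scratch path are illustrative assumptions, not the harness's exact source.

#!/usr/bin/env bash
# Sketch of the nbd readiness check traced above: wait for the device
# node to appear in /proc/partitions, then confirm a single 4 KiB
# direct read completes. Retry count and paths are illustrative.
wait_for_nbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    ((i <= 20)) || return 1                       # device never appeared
    dd if="/dev/$nbd_name" of=/tmp/nbdtest \
        bs=4096 count=1 iflag=direct || return 1  # a hung FTL device blocks or errors here
    [[ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ]]   # the read produced a full block
}

wait_for_nbd nbd0

The direct-I/O read matters: a device node can exist in /proc/partitions while the backing FTL instance is still unable to serve I/O, so the 1-block dd is what actually gates the test.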
00:25:43.052 [2024-11-20 12:58:08.052603] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80396 ] 00:25:43.052 [2024-11-20 12:58:08.206531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.052 [2024-11-20 12:58:08.298308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:43.996  [2024-11-20T12:58:10.902Z] Copying: 34/1024 [MB] (34 MBps) [2024-11-20T12:58:11.847Z] Copying: 71/1024 [MB] (36 MBps) [2024-11-20T12:58:12.791Z] Copying: 106/1024 [MB] (35 MBps) [2024-11-20T12:58:13.734Z] Copying: 139/1024 [MB] (32 MBps) [2024-11-20T12:58:14.676Z] Copying: 174/1024 [MB] (35 MBps) [2024-11-20T12:58:15.620Z] Copying: 206/1024 [MB] (31 MBps) [2024-11-20T12:58:16.564Z] Copying: 237/1024 [MB] (31 MBps) [2024-11-20T12:58:17.951Z] Copying: 270/1024 [MB] (32 MBps) [2024-11-20T12:58:18.522Z] Copying: 301/1024 [MB] (31 MBps) [2024-11-20T12:58:19.908Z] Copying: 331/1024 [MB] (30 MBps) [2024-11-20T12:58:20.849Z] Copying: 364/1024 [MB] (32 MBps) [2024-11-20T12:58:21.822Z] Copying: 394/1024 [MB] (29 MBps) [2024-11-20T12:58:22.765Z] Copying: 425/1024 [MB] (30 MBps) [2024-11-20T12:58:23.712Z] Copying: 460/1024 [MB] (35 MBps) [2024-11-20T12:58:24.657Z] Copying: 492/1024 [MB] (32 MBps) [2024-11-20T12:58:25.597Z] Copying: 523/1024 [MB] (30 MBps) [2024-11-20T12:58:26.535Z] Copying: 554/1024 [MB] (30 MBps) [2024-11-20T12:58:27.917Z] Copying: 584/1024 [MB] (30 MBps) [2024-11-20T12:58:28.860Z] Copying: 615/1024 [MB] (31 MBps) [2024-11-20T12:58:29.802Z] Copying: 647/1024 [MB] (31 MBps) [2024-11-20T12:58:30.744Z] Copying: 680/1024 [MB] (32 MBps) [2024-11-20T12:58:31.686Z] Copying: 716/1024 [MB] (35 MBps) [2024-11-20T12:58:32.629Z] Copying: 746/1024 [MB] (30 MBps) [2024-11-20T12:58:33.573Z] Copying: 780/1024 [MB] (33 MBps) [2024-11-20T12:58:34.517Z] Copying: 811/1024 [MB] (31 MBps) [2024-11-20T12:58:35.904Z] Copying: 845/1024 [MB] (33 MBps) [2024-11-20T12:58:36.849Z] Copying: 880/1024 [MB] (35 MBps) [2024-11-20T12:58:37.791Z] Copying: 916/1024 [MB] (35 MBps) [2024-11-20T12:58:38.736Z] Copying: 944/1024 [MB] (27 MBps) [2024-11-20T12:58:39.697Z] Copying: 976/1024 [MB] (32 MBps) [2024-11-20T12:58:40.266Z] Copying: 1008/1024 [MB] (31 MBps) [2024-11-20T12:58:40.839Z] Copying: 1024/1024 [MB] (average 32 MBps) 00:26:15.320 00:26:15.320 12:58:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:26:15.320 12:58:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:26:15.320 12:58:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:15.581 [2024-11-20 12:58:40.983996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.581 [2024-11-20 12:58:40.984037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:15.581 [2024-11-20 12:58:40.984048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:15.581 [2024-11-20 12:58:40.984056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.581 [2024-11-20 12:58:40.984075] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:15.581 [2024-11-20 12:58:40.986218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:15.581 [2024-11-20 12:58:40.986244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:15.581 [2024-11-20 12:58:40.986255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.126 ms 00:26:15.581 [2024-11-20 12:58:40.986262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.581 [2024-11-20 12:58:40.987992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.581 [2024-11-20 12:58:40.988020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:15.581 [2024-11-20 12:58:40.988030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.705 ms 00:26:15.581 [2024-11-20 12:58:40.988036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.581 [2024-11-20 12:58:41.001233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.581 [2024-11-20 12:58:41.001266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:15.581 [2024-11-20 12:58:41.001277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.180 ms 00:26:15.581 [2024-11-20 12:58:41.001283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.581 [2024-11-20 12:58:41.006133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.581 [2024-11-20 12:58:41.006156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:15.581 [2024-11-20 12:58:41.006166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.819 ms 00:26:15.581 [2024-11-20 12:58:41.006171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.581 [2024-11-20 12:58:41.024814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.581 [2024-11-20 12:58:41.024839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:15.581 [2024-11-20 12:58:41.024849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.589 ms 00:26:15.581 [2024-11-20 12:58:41.024855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.581 [2024-11-20 12:58:41.036463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.581 [2024-11-20 12:58:41.036489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:15.581 [2024-11-20 12:58:41.036499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.577 ms 00:26:15.581 [2024-11-20 12:58:41.036508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.581 [2024-11-20 12:58:41.036613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.581 [2024-11-20 12:58:41.036620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:15.581 [2024-11-20 12:58:41.036628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:26:15.581 [2024-11-20 12:58:41.036634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.581 [2024-11-20 12:58:41.054300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.581 [2024-11-20 12:58:41.054398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:15.581 [2024-11-20 12:58:41.054449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.651 ms 00:26:15.581 [2024-11-20 12:58:41.054467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.581 [2024-11-20 
12:58:41.071481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.581 [2024-11-20 12:58:41.071575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:15.581 [2024-11-20 12:58:41.071590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.979 ms 00:26:15.581 [2024-11-20 12:58:41.071595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.581 [2024-11-20 12:58:41.088456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.581 [2024-11-20 12:58:41.088481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:15.581 [2024-11-20 12:58:41.088490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.834 ms 00:26:15.581 [2024-11-20 12:58:41.088495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.845 [2024-11-20 12:58:41.105502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.845 [2024-11-20 12:58:41.105527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:15.845 [2024-11-20 12:58:41.105536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.949 ms 00:26:15.845 [2024-11-20 12:58:41.105542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.845 [2024-11-20 12:58:41.105569] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:15.845 [2024-11-20 12:58:41.105580-12:58:41.106275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 through Band 100: 0 / 261120 wr_cnt: 0 state: free (100 identical per-band lines collapsed) 00:26:15.846 [2024-11-20 12:58:41.106287] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:15.846 [2024-11-20 12:58:41.106295] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f0a2d148-51f7-42a1-a3b5-778bfb33a11b 00:26:15.846 [2024-11-20 12:58:41.106301] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:15.846 [2024-11-20 12:58:41.106309] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:15.846 [2024-11-20 12:58:41.106315] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:15.846 [2024-11-20 12:58:41.106324] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:15.846 [2024-11-20 12:58:41.106329] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:15.846 [2024-11-20 12:58:41.106336] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:15.846 [2024-11-20 12:58:41.106341] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:15.846 [2024-11-20 12:58:41.106347] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:15.846 [2024-11-20 12:58:41.106352] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:15.846 [2024-11-20 12:58:41.106359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.846 [2024-11-20 12:58:41.106364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:15.846 [2024-11-20 12:58:41.106372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:26:15.846 [2024-11-20 12:58:41.106377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.846 [2024-11-20 12:58:41.115896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.846 [2024-11-20 12:58:41.115917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:15.846 [2024-11-20 12:58:41.115928] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.493 ms 00:26:15.846 [2024-11-20 12:58:41.115934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.846 [2024-11-20 12:58:41.116202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.846 [2024-11-20 12:58:41.116209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:15.846 [2024-11-20 12:58:41.116216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:26:15.846 [2024-11-20 12:58:41.116222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.846 [2024-11-20 12:58:41.148694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.846 [2024-11-20 12:58:41.148720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:15.846 [2024-11-20 12:58:41.148729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.846 [2024-11-20 12:58:41.148735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.846 [2024-11-20 12:58:41.148791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.846 [2024-11-20 12:58:41.148798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:15.846 [2024-11-20 12:58:41.148805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.846 [2024-11-20 12:58:41.148811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.846 [2024-11-20 12:58:41.148861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.846 [2024-11-20 12:58:41.148869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:15.846 [2024-11-20 12:58:41.148878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.846 [2024-11-20 12:58:41.148884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.846 [2024-11-20 12:58:41.148900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.846 [2024-11-20 12:58:41.148906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:15.846 [2024-11-20 12:58:41.148913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.846 [2024-11-20 12:58:41.148919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.846 [2024-11-20 12:58:41.207526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.846 [2024-11-20 12:58:41.207560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:15.846 [2024-11-20 12:58:41.207570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.846 [2024-11-20 12:58:41.207575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.846 [2024-11-20 12:58:41.255181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.846 [2024-11-20 12:58:41.255214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:15.846 [2024-11-20 12:58:41.255224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.846 [2024-11-20 12:58:41.255229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.846 [2024-11-20 12:58:41.255300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.846 [2024-11-20 12:58:41.255308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize core IO channel 00:26:15.846 [2024-11-20 12:58:41.255316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.846 [2024-11-20 12:58:41.255323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.846 [2024-11-20 12:58:41.255359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.846 [2024-11-20 12:58:41.255366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:15.846 [2024-11-20 12:58:41.255374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.846 [2024-11-20 12:58:41.255380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.846 [2024-11-20 12:58:41.255449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.846 [2024-11-20 12:58:41.255456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:15.846 [2024-11-20 12:58:41.255463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.846 [2024-11-20 12:58:41.255469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.846 [2024-11-20 12:58:41.255496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.846 [2024-11-20 12:58:41.255503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:15.846 [2024-11-20 12:58:41.255510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.846 [2024-11-20 12:58:41.255515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.847 [2024-11-20 12:58:41.255544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.847 [2024-11-20 12:58:41.255551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:15.847 [2024-11-20 12:58:41.255559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.847 [2024-11-20 12:58:41.255564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.847 [2024-11-20 12:58:41.255601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.847 [2024-11-20 12:58:41.255608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:15.847 [2024-11-20 12:58:41.255615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.847 [2024-11-20 12:58:41.255621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.847 [2024-11-20 12:58:41.255722] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 271.697 ms, result 0 00:26:15.847 true 00:26:15.847 12:58:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 80151 00:26:15.847 12:58:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid80151 00:26:15.847 12:58:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:26:15.847 [2024-11-20 12:58:41.330172] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
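The kill -9 80151 traced above is the point of the test: dirty_shutdown.sh SIGKILLs the spdk_tgt that owns ftl0, so the device never runs the clean 'FTL shutdown' pass it executed earlier. Fresh data is then written to testfile2 and replayed onto ftl0 through spdk_dd using the bdev config saved to ftl.json, which forces the recovery visible below (blobstore replay, dirty superblock load). A condensed sketch of that kill-and-recover sequence, assuming the paths shown in the trace and a tgt_pid captured when the target was launched:

#!/usr/bin/env bash
tgt_pid=80151                        # pid recorded at spdk_tgt launch (this run's value)
SPDK=/home/vagrant/spdk_repo/spdk    # repo path taken from the trace

kill -9 "$tgt_pid"                   # dirty shutdown: no 'FTL shutdown' pass runs
rm -f "/dev/shm/spdk_tgt_trace.pid$tgt_pid"

# Generate new data, then replay it onto the dirty device; spdk_dd
# attaches ftl0 from the JSON config saved before the kill, so FTL
# startup must recover state (blobstore replay) before serving I/O.
"$SPDK/build/bin/spdk_dd" --if=/dev/urandom \
    --of="$SPDK/test/ftl/testfile2" --bs=4096 --count=262144
"$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/testfile2" --ob=ftl0 \
    --count=262144 --seek=262144 --json="$SPDK/test/ftl/config/ftl.json"

Because the second spdk_dd is a standalone SPDK app, the 'Killed' diagnostic from the shell and the recovery notices from the new pid (80807) interleave in the log below.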
00:26:15.847 [2024-11-20 12:58:41.330259] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80752 ] 00:26:16.108 [2024-11-20 12:58:41.481058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.108 [2024-11-20 12:58:41.557260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.493  [2024-11-20T12:58:43.957Z] Copying: 254/1024 [MB] (254 MBps) [2024-11-20T12:58:44.902Z] Copying: 510/1024 [MB] (255 MBps) [2024-11-20T12:58:45.845Z] Copying: 765/1024 [MB] (255 MBps) [2024-11-20T12:58:45.845Z] Copying: 1019/1024 [MB] (253 MBps) [2024-11-20T12:58:46.418Z] Copying: 1024/1024 [MB] (average 254 MBps) 00:26:20.899 00:26:20.899 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 80151 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:26:20.899 12:58:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:20.899 [2024-11-20 12:58:46.363074] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:26:20.899 [2024-11-20 12:58:46.363335] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80807 ] 00:26:21.160 [2024-11-20 12:58:46.518136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.161 [2024-11-20 12:58:46.595200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.420 [2024-11-20 12:58:46.800508] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:21.420 [2024-11-20 12:58:46.800699] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:21.420 [2024-11-20 12:58:46.863093] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:21.420 [2024-11-20 12:58:46.863468] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:21.420 [2024-11-20 12:58:46.863768] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:21.679 [2024-11-20 12:58:47.029630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.679 [2024-11-20 12:58:47.029774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:21.679 [2024-11-20 12:58:47.029834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:21.679 [2024-11-20 12:58:47.029853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.679 [2024-11-20 12:58:47.029909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.679 [2024-11-20 12:58:47.029997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:21.679 [2024-11-20 12:58:47.030036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:26:21.679 [2024-11-20 12:58:47.030050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.679 [2024-11-20 12:58:47.030074] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:21.679 
[2024-11-20 12:58:47.030595] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:21.679 [2024-11-20 12:58:47.030669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.679 [2024-11-20 12:58:47.030706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:21.679 [2024-11-20 12:58:47.030724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:26:21.679 [2024-11-20 12:58:47.030746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.679 [2024-11-20 12:58:47.031686] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:21.679 [2024-11-20 12:58:47.041365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.679 [2024-11-20 12:58:47.041461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:21.679 [2024-11-20 12:58:47.041501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.681 ms 00:26:21.679 [2024-11-20 12:58:47.041518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.679 [2024-11-20 12:58:47.041562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.679 [2024-11-20 12:58:47.041643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:21.679 [2024-11-20 12:58:47.041662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:21.679 [2024-11-20 12:58:47.041676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.679 [2024-11-20 12:58:47.045981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.679 [2024-11-20 12:58:47.046061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:21.679 [2024-11-20 12:58:47.046099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.250 ms 00:26:21.679 [2024-11-20 12:58:47.046115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.679 [2024-11-20 12:58:47.046178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.679 [2024-11-20 12:58:47.046298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:21.679 [2024-11-20 12:58:47.046316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:26:21.679 [2024-11-20 12:58:47.046331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.679 [2024-11-20 12:58:47.046372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.679 [2024-11-20 12:58:47.046395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:21.679 [2024-11-20 12:58:47.046409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:21.679 [2024-11-20 12:58:47.046455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.679 [2024-11-20 12:58:47.046484] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:21.679 [2024-11-20 12:58:47.049047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.679 [2024-11-20 12:58:47.049129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:21.679 [2024-11-20 12:58:47.049167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.567 ms 00:26:21.679 [2024-11-20 12:58:47.049183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:21.679 [2024-11-20 12:58:47.049218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.679 [2024-11-20 12:58:47.049240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:21.679 [2024-11-20 12:58:47.049283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:21.679 [2024-11-20 12:58:47.049299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.679 [2024-11-20 12:58:47.049323] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:21.679 [2024-11-20 12:58:47.049349] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:21.679 [2024-11-20 12:58:47.049412] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:21.679 [2024-11-20 12:58:47.049492] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:21.679 [2024-11-20 12:58:47.049588] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:21.679 [2024-11-20 12:58:47.049636] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:21.679 [2024-11-20 12:58:47.049662] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:21.679 [2024-11-20 12:58:47.049686] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:21.679 [2024-11-20 12:58:47.049732] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:21.679 [2024-11-20 12:58:47.049766] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:21.679 [2024-11-20 12:58:47.049781] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:21.679 [2024-11-20 12:58:47.049817] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:21.679 [2024-11-20 12:58:47.049834] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:21.679 [2024-11-20 12:58:47.049849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.679 [2024-11-20 12:58:47.049863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:21.679 [2024-11-20 12:58:47.049897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:26:21.679 [2024-11-20 12:58:47.049913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.679 [2024-11-20 12:58:47.049991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.679 [2024-11-20 12:58:47.050011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:21.679 [2024-11-20 12:58:47.050026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:21.679 [2024-11-20 12:58:47.050058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.679 [2024-11-20 12:58:47.050147] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:21.679 [2024-11-20 12:58:47.050219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:21.679 [2024-11-20 12:58:47.050237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:21.679 [2024-11-20 12:58:47.050252] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:21.679 [2024-11-20 12:58:47.050266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:21.679 [2024-11-20 12:58:47.050280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:21.679 [2024-11-20 12:58:47.050294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:21.679 [2024-11-20 12:58:47.050308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:21.679 [2024-11-20 12:58:47.050322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:21.679 [2024-11-20 12:58:47.050356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:21.679 [2024-11-20 12:58:47.050419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:21.680 [2024-11-20 12:58:47.050442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:21.680 [2024-11-20 12:58:47.050475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:21.680 [2024-11-20 12:58:47.050491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:21.680 [2024-11-20 12:58:47.050505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:21.680 [2024-11-20 12:58:47.050519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:21.680 [2024-11-20 12:58:47.050533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:21.680 [2024-11-20 12:58:47.050571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:21.680 [2024-11-20 12:58:47.050587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:21.680 [2024-11-20 12:58:47.050601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:21.680 [2024-11-20 12:58:47.050615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:21.680 [2024-11-20 12:58:47.050629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:21.680 [2024-11-20 12:58:47.050658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:21.680 [2024-11-20 12:58:47.050675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:21.680 [2024-11-20 12:58:47.050688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:21.680 [2024-11-20 12:58:47.050723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:21.680 [2024-11-20 12:58:47.050747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:21.680 [2024-11-20 12:58:47.050762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:21.680 [2024-11-20 12:58:47.050778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:21.680 [2024-11-20 12:58:47.050810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:21.680 [2024-11-20 12:58:47.050825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:21.680 [2024-11-20 12:58:47.050839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:21.680 [2024-11-20 12:58:47.050853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:21.680 [2024-11-20 12:58:47.050891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:21.680 [2024-11-20 12:58:47.050907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:21.680 
[2024-11-20 12:58:47.050921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:21.680 [2024-11-20 12:58:47.050935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:21.680 [2024-11-20 12:58:47.050966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:21.680 [2024-11-20 12:58:47.050983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:21.680 [2024-11-20 12:58:47.050996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:21.680 [2024-11-20 12:58:47.051010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:21.680 [2024-11-20 12:58:47.051024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:21.680 [2024-11-20 12:58:47.051102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:21.680 [2024-11-20 12:58:47.051118] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:21.680 [2024-11-20 12:58:47.051136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:21.680 [2024-11-20 12:58:47.051151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:21.680 [2024-11-20 12:58:47.051168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:21.680 [2024-11-20 12:58:47.051183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:21.680 [2024-11-20 12:58:47.051197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:21.680 [2024-11-20 12:58:47.051211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:21.680 [2024-11-20 12:58:47.051225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:21.680 [2024-11-20 12:58:47.051264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:21.680 [2024-11-20 12:58:47.051281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:21.680 [2024-11-20 12:58:47.051296] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:21.680 [2024-11-20 12:58:47.051319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:21.680 [2024-11-20 12:58:47.051342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:21.680 [2024-11-20 12:58:47.051364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:21.680 [2024-11-20 12:58:47.051385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:21.680 [2024-11-20 12:58:47.051437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:21.680 [2024-11-20 12:58:47.051459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:21.680 [2024-11-20 12:58:47.051480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:21.680 [2024-11-20 12:58:47.051502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:26:21.680 [2024-11-20 12:58:47.051523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:21.680 [2024-11-20 12:58:47.051545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:21.680 [2024-11-20 12:58:47.051595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:21.680 [2024-11-20 12:58:47.051617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:21.680 [2024-11-20 12:58:47.051639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:21.680 [2024-11-20 12:58:47.051660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:21.680 [2024-11-20 12:58:47.051682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:21.680 [2024-11-20 12:58:47.051704] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:21.680 [2024-11-20 12:58:47.051759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:21.680 [2024-11-20 12:58:47.051767] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:21.680 [2024-11-20 12:58:47.051773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:21.680 [2024-11-20 12:58:47.051779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:21.680 [2024-11-20 12:58:47.051785] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:21.680 [2024-11-20 12:58:47.051791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.680 [2024-11-20 12:58:47.051802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:21.680 [2024-11-20 12:58:47.051808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.696 ms 00:26:21.680 [2024-11-20 12:58:47.051813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.680 [2024-11-20 12:58:47.072649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.680 [2024-11-20 12:58:47.072771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:21.680 [2024-11-20 12:58:47.072814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.783 ms 00:26:21.680 [2024-11-20 12:58:47.072831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.680 [2024-11-20 12:58:47.072909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.680 [2024-11-20 12:58:47.072929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:21.680 [2024-11-20 12:58:47.072944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:26:21.680 [2024-11-20 
12:58:47.072958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.680 [2024-11-20 12:58:47.113317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.680 [2024-11-20 12:58:47.113451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:21.680 [2024-11-20 12:58:47.113733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.304 ms 00:26:21.680 [2024-11-20 12:58:47.113781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.680 [2024-11-20 12:58:47.113871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.680 [2024-11-20 12:58:47.113893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:21.680 [2024-11-20 12:58:47.113912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:21.680 [2024-11-20 12:58:47.113949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.680 [2024-11-20 12:58:47.114287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.680 [2024-11-20 12:58:47.114362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:21.680 [2024-11-20 12:58:47.114404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:26:21.680 [2024-11-20 12:58:47.114422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.680 [2024-11-20 12:58:47.114566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.680 [2024-11-20 12:58:47.114608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:21.680 [2024-11-20 12:58:47.114657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:26:21.680 [2024-11-20 12:58:47.114675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.680 [2024-11-20 12:58:47.125121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.680 [2024-11-20 12:58:47.125206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:21.680 [2024-11-20 12:58:47.125257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.422 ms 00:26:21.680 [2024-11-20 12:58:47.125266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.680 [2024-11-20 12:58:47.134939] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:21.680 [2024-11-20 12:58:47.134964] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:21.680 [2024-11-20 12:58:47.134974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.681 [2024-11-20 12:58:47.134980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:21.681 [2024-11-20 12:58:47.134987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.616 ms 00:26:21.681 [2024-11-20 12:58:47.134993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.681 [2024-11-20 12:58:47.153844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.681 [2024-11-20 12:58:47.153874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:21.681 [2024-11-20 12:58:47.153890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.820 ms 00:26:21.681 [2024-11-20 12:58:47.153898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:21.681 [2024-11-20 12:58:47.162877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.681 [2024-11-20 12:58:47.162901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:21.681 [2024-11-20 12:58:47.162910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.936 ms 00:26:21.681 [2024-11-20 12:58:47.162915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.681 [2024-11-20 12:58:47.171591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.681 [2024-11-20 12:58:47.171677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:21.681 [2024-11-20 12:58:47.171716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.649 ms 00:26:21.681 [2024-11-20 12:58:47.171733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.681 [2024-11-20 12:58:47.172234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.681 [2024-11-20 12:58:47.172303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:21.681 [2024-11-20 12:58:47.172341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:26:21.681 [2024-11-20 12:58:47.172358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.939 [2024-11-20 12:58:47.215783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.939 [2024-11-20 12:58:47.215955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:21.939 [2024-11-20 12:58:47.215999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.400 ms 00:26:21.939 [2024-11-20 12:58:47.216017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.939 [2024-11-20 12:58:47.223882] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:21.939 [2024-11-20 12:58:47.226081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.939 [2024-11-20 12:58:47.226165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:21.939 [2024-11-20 12:58:47.226203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.021 ms 00:26:21.939 [2024-11-20 12:58:47.226220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.939 [2024-11-20 12:58:47.226301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.939 [2024-11-20 12:58:47.226322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:21.939 [2024-11-20 12:58:47.226338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:21.939 [2024-11-20 12:58:47.226353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.939 [2024-11-20 12:58:47.226460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.939 [2024-11-20 12:58:47.226483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:21.940 [2024-11-20 12:58:47.226499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:26:21.940 [2024-11-20 12:58:47.226513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.940 [2024-11-20 12:58:47.226538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.940 [2024-11-20 12:58:47.226611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:21.940 
[2024-11-20 12:58:47.226626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:21.940 [2024-11-20 12:58:47.226640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.940 [2024-11-20 12:58:47.226673] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:21.940 [2024-11-20 12:58:47.226721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.940 [2024-11-20 12:58:47.226758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:21.940 [2024-11-20 12:58:47.226826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:26:21.940 [2024-11-20 12:58:47.226843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.940 [2024-11-20 12:58:47.244644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.940 [2024-11-20 12:58:47.244745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:21.940 [2024-11-20 12:58:47.244786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.770 ms 00:26:21.940 [2024-11-20 12:58:47.244803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.940 [2024-11-20 12:58:47.244862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.940 [2024-11-20 12:58:47.244881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:21.940 [2024-11-20 12:58:47.244897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:26:21.940 [2024-11-20 12:58:47.244912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.940 [2024-11-20 12:58:47.245959] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 215.982 ms, result 0 00:26:22.874  [2024-11-20T12:58:49.339Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-20T12:58:50.282Z] Copying: 55/1024 [MB] (28 MBps) [2024-11-20T12:58:51.668Z] Copying: 70/1024 [MB] (14 MBps) [2024-11-20T12:58:52.614Z] Copying: 86/1024 [MB] (16 MBps) [2024-11-20T12:58:53.557Z] Copying: 103/1024 [MB] (17 MBps) [2024-11-20T12:58:54.555Z] Copying: 128/1024 [MB] (25 MBps) [2024-11-20T12:58:55.510Z] Copying: 147/1024 [MB] (18 MBps) [2024-11-20T12:58:56.451Z] Copying: 165/1024 [MB] (17 MBps) [2024-11-20T12:58:57.393Z] Copying: 185/1024 [MB] (20 MBps) [2024-11-20T12:58:58.334Z] Copying: 195/1024 [MB] (10 MBps) [2024-11-20T12:58:59.278Z] Copying: 206/1024 [MB] (10 MBps) [2024-11-20T12:59:00.670Z] Copying: 216/1024 [MB] (10 MBps) [2024-11-20T12:59:01.614Z] Copying: 230/1024 [MB] (13 MBps) [2024-11-20T12:59:02.558Z] Copying: 244/1024 [MB] (14 MBps) [2024-11-20T12:59:03.527Z] Copying: 257/1024 [MB] (12 MBps) [2024-11-20T12:59:04.474Z] Copying: 301/1024 [MB] (44 MBps) [2024-11-20T12:59:05.419Z] Copying: 355/1024 [MB] (53 MBps) [2024-11-20T12:59:06.362Z] Copying: 408/1024 [MB] (53 MBps) [2024-11-20T12:59:07.304Z] Copying: 460/1024 [MB] (51 MBps) [2024-11-20T12:59:08.692Z] Copying: 481/1024 [MB] (21 MBps) [2024-11-20T12:59:09.265Z] Copying: 501/1024 [MB] (20 MBps) [2024-11-20T12:59:10.690Z] Copying: 518/1024 [MB] (16 MBps) [2024-11-20T12:59:11.262Z] Copying: 532/1024 [MB] (14 MBps) [2024-11-20T12:59:12.651Z] Copying: 551/1024 [MB] (18 MBps) [2024-11-20T12:59:13.598Z] Copying: 567/1024 [MB] (16 MBps) [2024-11-20T12:59:14.544Z] Copying: 583/1024 [MB] (16 MBps) [2024-11-20T12:59:15.489Z] Copying: 594/1024 [MB] (10 MBps) [2024-11-20T12:59:16.432Z] Copying: 604/1024 
[MB] (10 MBps) [2024-11-20T12:59:17.378Z] Copying: 614/1024 [MB] (10 MBps) [2024-11-20T12:59:18.321Z] Copying: 625/1024 [MB] (10 MBps) [2024-11-20T12:59:19.263Z] Copying: 638/1024 [MB] (13 MBps) [2024-11-20T12:59:20.649Z] Copying: 656/1024 [MB] (17 MBps) [2024-11-20T12:59:21.593Z] Copying: 673/1024 [MB] (16 MBps) [2024-11-20T12:59:22.537Z] Copying: 686/1024 [MB] (13 MBps) [2024-11-20T12:59:23.481Z] Copying: 701/1024 [MB] (14 MBps) [2024-11-20T12:59:24.426Z] Copying: 719/1024 [MB] (18 MBps) [2024-11-20T12:59:25.369Z] Copying: 733/1024 [MB] (13 MBps) [2024-11-20T12:59:26.338Z] Copying: 759/1024 [MB] (25 MBps) [2024-11-20T12:59:27.312Z] Copying: 770/1024 [MB] (11 MBps) [2024-11-20T12:59:28.698Z] Copying: 783/1024 [MB] (12 MBps) [2024-11-20T12:59:29.268Z] Copying: 800/1024 [MB] (17 MBps) [2024-11-20T12:59:30.652Z] Copying: 821/1024 [MB] (20 MBps) [2024-11-20T12:59:31.593Z] Copying: 840/1024 [MB] (19 MBps) [2024-11-20T12:59:32.538Z] Copying: 858/1024 [MB] (18 MBps) [2024-11-20T12:59:33.480Z] Copying: 872/1024 [MB] (13 MBps) [2024-11-20T12:59:34.424Z] Copying: 894/1024 [MB] (21 MBps) [2024-11-20T12:59:35.368Z] Copying: 913/1024 [MB] (19 MBps) [2024-11-20T12:59:36.311Z] Copying: 931/1024 [MB] (17 MBps) [2024-11-20T12:59:37.692Z] Copying: 945/1024 [MB] (14 MBps) [2024-11-20T12:59:38.265Z] Copying: 960/1024 [MB] (14 MBps) [2024-11-20T12:59:39.654Z] Copying: 971/1024 [MB] (10 MBps) [2024-11-20T12:59:40.597Z] Copying: 984/1024 [MB] (13 MBps) [2024-11-20T12:59:41.542Z] Copying: 999/1024 [MB] (15 MBps) [2024-11-20T12:59:42.513Z] Copying: 1015/1024 [MB] (15 MBps) [2024-11-20T12:59:42.782Z] Copying: 1048184/1048576 [kB] (8216 kBps) [2024-11-20T12:59:42.782Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-20 12:59:42.643668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.263 [2024-11-20 12:59:42.643753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:17.263 [2024-11-20 12:59:42.643771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:17.263 [2024-11-20 12:59:42.643780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.263 [2024-11-20 12:59:42.643805] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:17.263 [2024-11-20 12:59:42.646872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.263 [2024-11-20 12:59:42.646908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:17.263 [2024-11-20 12:59:42.646920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.050 ms 00:27:17.263 [2024-11-20 12:59:42.646929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.263 [2024-11-20 12:59:42.658139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.263 [2024-11-20 12:59:42.658184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:17.263 [2024-11-20 12:59:42.658197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.448 ms 00:27:17.263 [2024-11-20 12:59:42.658205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.263 [2024-11-20 12:59:42.681833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.263 [2024-11-20 12:59:42.681880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:17.263 [2024-11-20 12:59:42.681893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.610 ms 00:27:17.263 
[2024-11-20 12:59:42.681901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.263 [2024-11-20 12:59:42.688055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.263 [2024-11-20 12:59:42.688100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:17.263 [2024-11-20 12:59:42.688111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.116 ms 00:27:17.263 [2024-11-20 12:59:42.688119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.263 [2024-11-20 12:59:42.714172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.263 [2024-11-20 12:59:42.714222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:17.263 [2024-11-20 12:59:42.714235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.010 ms 00:27:17.263 [2024-11-20 12:59:42.714243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.263 [2024-11-20 12:59:42.729807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.263 [2024-11-20 12:59:42.729852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:17.263 [2024-11-20 12:59:42.729865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.520 ms 00:27:17.263 [2024-11-20 12:59:42.729874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.527 [2024-11-20 12:59:42.865668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.527 [2024-11-20 12:59:42.865733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:17.527 [2024-11-20 12:59:42.865764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 135.746 ms 00:27:17.527 [2024-11-20 12:59:42.865779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.527 [2024-11-20 12:59:42.890601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.527 [2024-11-20 12:59:42.890645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:17.527 [2024-11-20 12:59:42.890657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.806 ms 00:27:17.527 [2024-11-20 12:59:42.890664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.527 [2024-11-20 12:59:42.915662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.527 [2024-11-20 12:59:42.915707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:17.527 [2024-11-20 12:59:42.915719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.956 ms 00:27:17.527 [2024-11-20 12:59:42.915727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.527 [2024-11-20 12:59:42.939887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.527 [2024-11-20 12:59:42.939931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:17.527 [2024-11-20 12:59:42.939943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.101 ms 00:27:17.527 [2024-11-20 12:59:42.939951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.527 [2024-11-20 12:59:42.964036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.527 [2024-11-20 12:59:42.964079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:17.527 [2024-11-20 12:59:42.964090] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.017 ms 00:27:17.527 [2024-11-20 12:59:42.964098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.527 [2024-11-20 12:59:42.964138] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:17.527 [2024-11-20 12:59:42.964155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 102400 / 261120 wr_cnt: 1 state: open 00:27:17.527 [2024-11-20 12:59:42.964166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 
261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:17.527 [2024-11-20 12:59:42.964640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964731] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 
12:59:42.964959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:17.528 [2024-11-20 12:59:42.964991] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:17.528 [2024-11-20 12:59:42.964999] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f0a2d148-51f7-42a1-a3b5-778bfb33a11b 00:27:17.528 [2024-11-20 12:59:42.965009] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 102400 00:27:17.528 [2024-11-20 12:59:42.965020] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 103360 00:27:17.528 [2024-11-20 12:59:42.965035] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 102400 00:27:17.528 [2024-11-20 12:59:42.965044] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0094 00:27:17.528 [2024-11-20 12:59:42.965052] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:17.528 [2024-11-20 12:59:42.965061] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:17.528 [2024-11-20 12:59:42.965069] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:17.528 [2024-11-20 12:59:42.965089] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:17.528 [2024-11-20 12:59:42.965096] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:17.528 [2024-11-20 12:59:42.965104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.528 [2024-11-20 12:59:42.965113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:17.528 [2024-11-20 12:59:42.965121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.967 ms 00:27:17.528 [2024-11-20 12:59:42.965129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.528 [2024-11-20 12:59:42.978509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.528 [2024-11-20 12:59:42.978690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:17.528 [2024-11-20 12:59:42.978709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.360 ms 00:27:17.528 [2024-11-20 12:59:42.978718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.528 [2024-11-20 12:59:42.979138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.528 [2024-11-20 12:59:42.979159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:17.528 [2024-11-20 12:59:42.979167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:27:17.528 [2024-11-20 12:59:42.979175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.528 [2024-11-20 12:59:43.015007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:17.528 [2024-11-20 12:59:43.015055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:17.528 [2024-11-20 12:59:43.015068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:17.528 [2024-11-20 12:59:43.015079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.528 [2024-11-20 12:59:43.015143] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:17.528 [2024-11-20 12:59:43.015153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:17.528 [2024-11-20 12:59:43.015163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:17.528 [2024-11-20 12:59:43.015172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.528 [2024-11-20 12:59:43.015237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:17.528 [2024-11-20 12:59:43.015249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:17.528 [2024-11-20 12:59:43.015259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:17.528 [2024-11-20 12:59:43.015268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.528 [2024-11-20 12:59:43.015285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:17.528 [2024-11-20 12:59:43.015295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:17.528 [2024-11-20 12:59:43.015304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:17.528 [2024-11-20 12:59:43.015313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.789 [2024-11-20 12:59:43.100471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:17.789 [2024-11-20 12:59:43.100526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:17.789 [2024-11-20 12:59:43.100540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:17.789 [2024-11-20 12:59:43.100549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.789 [2024-11-20 12:59:43.170495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:17.790 [2024-11-20 12:59:43.170556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:17.790 [2024-11-20 12:59:43.170568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:17.790 [2024-11-20 12:59:43.170578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.790 [2024-11-20 12:59:43.170671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:17.790 [2024-11-20 12:59:43.170681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:17.790 [2024-11-20 12:59:43.170690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:17.790 [2024-11-20 12:59:43.170699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.790 [2024-11-20 12:59:43.170765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:17.790 [2024-11-20 12:59:43.170777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:17.790 [2024-11-20 12:59:43.170786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:17.790 [2024-11-20 12:59:43.170794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.790 [2024-11-20 12:59:43.170896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:17.790 [2024-11-20 12:59:43.170912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:17.790 [2024-11-20 12:59:43.170921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:17.790 [2024-11-20 12:59:43.170929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:17.790 [2024-11-20 12:59:43.170960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:17.790 [2024-11-20 12:59:43.170970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:17.790 [2024-11-20 12:59:43.170978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:17.790 [2024-11-20 12:59:43.170987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.790 [2024-11-20 12:59:43.171030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:17.790 [2024-11-20 12:59:43.171042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:17.790 [2024-11-20 12:59:43.171051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:17.790 [2024-11-20 12:59:43.171059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.790 [2024-11-20 12:59:43.171109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:17.790 [2024-11-20 12:59:43.171120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:17.790 [2024-11-20 12:59:43.171129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:17.790 [2024-11-20 12:59:43.171137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.790 [2024-11-20 12:59:43.171272] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 527.599 ms, result 0 00:27:19.704 00:27:19.704 00:27:19.704 12:59:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:21.613 12:59:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:21.613 [2024-11-20 12:59:46.930418] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
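[Editor's note] The dirty_shutdown steps just above (the md5sum at ftl/dirty_shutdown.sh@90 and the spdk_dd invocation at @93) are the verification half of this test: data previously written through ftl0 is checksummed, the device goes through the shutdown/startup cycle traced above, and spdk_dd then copies the 262144 blocks back out so the two checksums can be compared. A minimal sketch of that write/checksum/readback pattern follows, using only the spdk_dd options visible in this log plus their symmetric input/output forms (--if/--ob); the paths mirror the log, and the exact choreography of dirty_shutdown.sh may differ:

    # write the test pattern through the FTL bdev defined in ftl.json
    build/bin/spdk_dd --if=test/ftl/testfile --ob=ftl0 --count=262144 \
        --json=test/ftl/config/ftl.json

    # record the checksum of what was written
    md5sum test/ftl/testfile > /tmp/testfile.md5

    # ... FTL shutdown and restart happen here (see the trace above) ...

    # read the same range back out of ftl0 into a second file
    build/bin/spdk_dd --ib=ftl0 --of=test/ftl/testfile2 --count=262144 \
        --json=test/ftl/config/ftl.json

    # the check passes only if both files hash identically
    diff <(md5sum < test/ftl/testfile) <(md5sum < test/ftl/testfile2)

As a side note, the statistics dumped during the shutdown above are internally consistent: the reported WAF of 1.0094 is total writes divided by user writes, 103360 / 102400 = 1.009375, the extra ~960 blocks presumably being the FTL's own metadata writes.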
00:27:21.613 [2024-11-20 12:59:46.930502] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81419 ] 00:27:21.613 [2024-11-20 12:59:47.086082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.874 [2024-11-20 12:59:47.182335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.136 [2024-11-20 12:59:47.446695] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:22.136 [2024-11-20 12:59:47.446801] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:22.136 [2024-11-20 12:59:47.608008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.136 [2024-11-20 12:59:47.608268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:22.136 [2024-11-20 12:59:47.608301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:22.136 [2024-11-20 12:59:47.608311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.136 [2024-11-20 12:59:47.608378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.136 [2024-11-20 12:59:47.608389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:22.136 [2024-11-20 12:59:47.608401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:27:22.137 [2024-11-20 12:59:47.608409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.137 [2024-11-20 12:59:47.608430] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:22.137 [2024-11-20 12:59:47.609136] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:22.137 [2024-11-20 12:59:47.609172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.137 [2024-11-20 12:59:47.609182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:22.137 [2024-11-20 12:59:47.609191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:27:22.137 [2024-11-20 12:59:47.609199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.137 [2024-11-20 12:59:47.610956] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:22.137 [2024-11-20 12:59:47.625209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.137 [2024-11-20 12:59:47.625264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:22.137 [2024-11-20 12:59:47.625278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.255 ms 00:27:22.137 [2024-11-20 12:59:47.625286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.137 [2024-11-20 12:59:47.625370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.137 [2024-11-20 12:59:47.625381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:22.137 [2024-11-20 12:59:47.625390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:27:22.137 [2024-11-20 12:59:47.625398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.137 [2024-11-20 12:59:47.633722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:22.137 [2024-11-20 12:59:47.633780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:22.137 [2024-11-20 12:59:47.633791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.242 ms 00:27:22.137 [2024-11-20 12:59:47.633806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.137 [2024-11-20 12:59:47.633907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.137 [2024-11-20 12:59:47.633917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:22.137 [2024-11-20 12:59:47.633927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:27:22.137 [2024-11-20 12:59:47.633935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.137 [2024-11-20 12:59:47.633979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.137 [2024-11-20 12:59:47.633990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:22.137 [2024-11-20 12:59:47.633998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:22.137 [2024-11-20 12:59:47.634006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.137 [2024-11-20 12:59:47.634033] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:22.137 [2024-11-20 12:59:47.638087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.137 [2024-11-20 12:59:47.638131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:22.137 [2024-11-20 12:59:47.638145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.060 ms 00:27:22.137 [2024-11-20 12:59:47.638154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.137 [2024-11-20 12:59:47.638189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.137 [2024-11-20 12:59:47.638198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:22.137 [2024-11-20 12:59:47.638206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:22.137 [2024-11-20 12:59:47.638214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.137 [2024-11-20 12:59:47.638267] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:22.137 [2024-11-20 12:59:47.638291] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:22.137 [2024-11-20 12:59:47.638327] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:22.137 [2024-11-20 12:59:47.638346] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:22.137 [2024-11-20 12:59:47.638452] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:22.137 [2024-11-20 12:59:47.638462] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:22.137 [2024-11-20 12:59:47.638473] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:22.137 [2024-11-20 12:59:47.638484] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:22.137 [2024-11-20 12:59:47.638494] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:22.137 [2024-11-20 12:59:47.638502] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:22.137 [2024-11-20 12:59:47.638511] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:22.137 [2024-11-20 12:59:47.638518] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:22.137 [2024-11-20 12:59:47.638529] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:22.137 [2024-11-20 12:59:47.638537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.137 [2024-11-20 12:59:47.638545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:22.137 [2024-11-20 12:59:47.638553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:27:22.137 [2024-11-20 12:59:47.638560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.137 [2024-11-20 12:59:47.638643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.137 [2024-11-20 12:59:47.638652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:22.137 [2024-11-20 12:59:47.638659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:22.137 [2024-11-20 12:59:47.638667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.137 [2024-11-20 12:59:47.638793] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:22.137 [2024-11-20 12:59:47.638807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:22.137 [2024-11-20 12:59:47.638815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:22.137 [2024-11-20 12:59:47.638823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.137 [2024-11-20 12:59:47.638831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:22.137 [2024-11-20 12:59:47.638837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:22.137 [2024-11-20 12:59:47.638844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:22.137 [2024-11-20 12:59:47.638851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:22.137 [2024-11-20 12:59:47.638858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:22.137 [2024-11-20 12:59:47.638865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:22.137 [2024-11-20 12:59:47.638873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:22.137 [2024-11-20 12:59:47.638880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:22.137 [2024-11-20 12:59:47.638887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:22.137 [2024-11-20 12:59:47.638894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:22.137 [2024-11-20 12:59:47.638902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:22.137 [2024-11-20 12:59:47.638915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.137 [2024-11-20 12:59:47.638922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:22.137 [2024-11-20 12:59:47.638929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:22.137 [2024-11-20 12:59:47.638936] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.137 [2024-11-20 12:59:47.638943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:22.137 [2024-11-20 12:59:47.638950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:22.137 [2024-11-20 12:59:47.638957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.137 [2024-11-20 12:59:47.638964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:22.137 [2024-11-20 12:59:47.638971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:22.137 [2024-11-20 12:59:47.638979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.137 [2024-11-20 12:59:47.638986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:22.137 [2024-11-20 12:59:47.638993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:22.137 [2024-11-20 12:59:47.638999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.137 [2024-11-20 12:59:47.639006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:22.137 [2024-11-20 12:59:47.639013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:22.137 [2024-11-20 12:59:47.639020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.137 [2024-11-20 12:59:47.639027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:22.137 [2024-11-20 12:59:47.639034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:22.137 [2024-11-20 12:59:47.639040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:22.137 [2024-11-20 12:59:47.639047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:22.137 [2024-11-20 12:59:47.639054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:22.137 [2024-11-20 12:59:47.639061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:22.137 [2024-11-20 12:59:47.639067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:22.137 [2024-11-20 12:59:47.639074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:22.137 [2024-11-20 12:59:47.639081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.137 [2024-11-20 12:59:47.639088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:22.137 [2024-11-20 12:59:47.639094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:22.137 [2024-11-20 12:59:47.639107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.137 [2024-11-20 12:59:47.639115] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:22.137 [2024-11-20 12:59:47.639124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:22.137 [2024-11-20 12:59:47.639131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:22.138 [2024-11-20 12:59:47.639139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.138 [2024-11-20 12:59:47.639147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:22.138 [2024-11-20 12:59:47.639153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:22.138 [2024-11-20 12:59:47.639160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:22.138 
[2024-11-20 12:59:47.639166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:22.138 [2024-11-20 12:59:47.639172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:22.138 [2024-11-20 12:59:47.639180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:22.138 [2024-11-20 12:59:47.639188] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:22.138 [2024-11-20 12:59:47.639198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:22.138 [2024-11-20 12:59:47.639208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:22.138 [2024-11-20 12:59:47.639215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:22.138 [2024-11-20 12:59:47.639223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:22.138 [2024-11-20 12:59:47.639232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:22.138 [2024-11-20 12:59:47.639240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:22.138 [2024-11-20 12:59:47.639247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:22.138 [2024-11-20 12:59:47.639254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:22.138 [2024-11-20 12:59:47.639262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:22.138 [2024-11-20 12:59:47.639270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:22.138 [2024-11-20 12:59:47.639277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:22.138 [2024-11-20 12:59:47.639285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:22.138 [2024-11-20 12:59:47.639293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:22.138 [2024-11-20 12:59:47.639300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:22.138 [2024-11-20 12:59:47.639308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:22.138 [2024-11-20 12:59:47.639315] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:22.138 [2024-11-20 12:59:47.639324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:22.138 [2024-11-20 12:59:47.639331] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:22.138 [2024-11-20 12:59:47.639339] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:22.138 [2024-11-20 12:59:47.639346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:22.138 [2024-11-20 12:59:47.639356] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:22.138 [2024-11-20 12:59:47.639364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.138 [2024-11-20 12:59:47.639372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:22.138 [2024-11-20 12:59:47.639380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:27:22.138 [2024-11-20 12:59:47.639387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.400 [2024-11-20 12:59:47.671579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.400 [2024-11-20 12:59:47.671632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:22.400 [2024-11-20 12:59:47.671644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.146 ms 00:27:22.400 [2024-11-20 12:59:47.671656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.400 [2024-11-20 12:59:47.671770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.400 [2024-11-20 12:59:47.671780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:22.400 [2024-11-20 12:59:47.671788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:27:22.400 [2024-11-20 12:59:47.671796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.400 [2024-11-20 12:59:47.715363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.400 [2024-11-20 12:59:47.715420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:22.400 [2024-11-20 12:59:47.715434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.503 ms 00:27:22.400 [2024-11-20 12:59:47.715443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.400 [2024-11-20 12:59:47.715496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.400 [2024-11-20 12:59:47.715507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:22.400 [2024-11-20 12:59:47.715519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:22.400 [2024-11-20 12:59:47.715528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.400 [2024-11-20 12:59:47.716223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.400 [2024-11-20 12:59:47.716249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:22.400 [2024-11-20 12:59:47.716260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:27:22.400 [2024-11-20 12:59:47.716269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.400 [2024-11-20 12:59:47.716425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.400 [2024-11-20 12:59:47.716443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:22.400 [2024-11-20 12:59:47.716457] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:27:22.400 [2024-11-20 12:59:47.716465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.400 [2024-11-20 12:59:47.732114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.400 [2024-11-20 12:59:47.732159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:22.400 [2024-11-20 12:59:47.732170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.627 ms 00:27:22.400 [2024-11-20 12:59:47.732179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.400 [2024-11-20 12:59:47.746225] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:22.400 [2024-11-20 12:59:47.746275] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:22.400 [2024-11-20 12:59:47.746289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.400 [2024-11-20 12:59:47.746298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:22.400 [2024-11-20 12:59:47.746308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.999 ms 00:27:22.400 [2024-11-20 12:59:47.746316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.400 [2024-11-20 12:59:47.772312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.400 [2024-11-20 12:59:47.772364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:22.400 [2024-11-20 12:59:47.772378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.940 ms 00:27:22.400 [2024-11-20 12:59:47.772386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.400 [2024-11-20 12:59:47.785225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.400 [2024-11-20 12:59:47.785283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:22.400 [2024-11-20 12:59:47.785296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.782 ms 00:27:22.400 [2024-11-20 12:59:47.785303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.400 [2024-11-20 12:59:47.798244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.400 [2024-11-20 12:59:47.798443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:22.400 [2024-11-20 12:59:47.798466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.892 ms 00:27:22.400 [2024-11-20 12:59:47.798474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.400 [2024-11-20 12:59:47.799165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.400 [2024-11-20 12:59:47.799201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:22.400 [2024-11-20 12:59:47.799215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:27:22.400 [2024-11-20 12:59:47.799223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.400 [2024-11-20 12:59:47.864972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.400 [2024-11-20 12:59:47.865199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:22.400 [2024-11-20 12:59:47.865233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.728 ms 00:27:22.400 [2024-11-20 12:59:47.865242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.400 [2024-11-20 12:59:47.877061] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:22.400 [2024-11-20 12:59:47.880445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.400 [2024-11-20 12:59:47.880623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:22.400 [2024-11-20 12:59:47.880643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.062 ms 00:27:22.400 [2024-11-20 12:59:47.880653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.400 [2024-11-20 12:59:47.880775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.400 [2024-11-20 12:59:47.880788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:22.400 [2024-11-20 12:59:47.880798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:27:22.400 [2024-11-20 12:59:47.880810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.400 [2024-11-20 12:59:47.882533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.400 [2024-11-20 12:59:47.882583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:22.400 [2024-11-20 12:59:47.882595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.679 ms 00:27:22.400 [2024-11-20 12:59:47.882603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.400 [2024-11-20 12:59:47.882639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.400 [2024-11-20 12:59:47.882650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:22.400 [2024-11-20 12:59:47.882658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:22.400 [2024-11-20 12:59:47.882667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.400 [2024-11-20 12:59:47.882711] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:22.400 [2024-11-20 12:59:47.882723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.400 [2024-11-20 12:59:47.882732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:22.400 [2024-11-20 12:59:47.882755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:22.400 [2024-11-20 12:59:47.882764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.400 [2024-11-20 12:59:47.908474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.400 [2024-11-20 12:59:47.908527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:22.400 [2024-11-20 12:59:47.908547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.690 ms 00:27:22.400 [2024-11-20 12:59:47.908556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.400 [2024-11-20 12:59:47.908649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.400 [2024-11-20 12:59:47.908660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:22.400 [2024-11-20 12:59:47.908670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:27:22.400 [2024-11-20 12:59:47.908679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:22.400 [2024-11-20 12:59:47.910086] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 301.562 ms, result 0 00:27:23.786  [2024-11-20T12:59:50.249Z] Copying: 1016/1048576 [kB] (1016 kBps) [2024-11-20T12:59:51.195Z] Copying: 4252/1048576 [kB] (3236 kBps) [2024-11-20T12:59:52.141Z] Copying: 20/1024 [MB] (16 MBps) [2024-11-20T12:59:53.528Z] Copying: 41/1024 [MB] (21 MBps) [2024-11-20T12:59:54.102Z] Copying: 73/1024 [MB] (32 MBps) [2024-11-20T12:59:55.492Z] Copying: 97/1024 [MB] (23 MBps) [2024-11-20T12:59:56.435Z] Copying: 125/1024 [MB] (28 MBps) [2024-11-20T12:59:57.377Z] Copying: 147/1024 [MB] (21 MBps) [2024-11-20T12:59:58.421Z] Copying: 170/1024 [MB] (23 MBps) [2024-11-20T12:59:59.418Z] Copying: 199/1024 [MB] (28 MBps) [2024-11-20T13:00:00.365Z] Copying: 225/1024 [MB] (26 MBps) [2024-11-20T13:00:01.310Z] Copying: 248/1024 [MB] (22 MBps) [2024-11-20T13:00:02.255Z] Copying: 278/1024 [MB] (30 MBps) [2024-11-20T13:00:03.202Z] Copying: 303/1024 [MB] (24 MBps) [2024-11-20T13:00:04.148Z] Copying: 339/1024 [MB] (36 MBps) [2024-11-20T13:00:05.095Z] Copying: 366/1024 [MB] (27 MBps) [2024-11-20T13:00:06.485Z] Copying: 396/1024 [MB] (30 MBps) [2024-11-20T13:00:07.429Z] Copying: 425/1024 [MB] (28 MBps) [2024-11-20T13:00:08.372Z] Copying: 454/1024 [MB] (28 MBps) [2024-11-20T13:00:09.333Z] Copying: 483/1024 [MB] (29 MBps) [2024-11-20T13:00:10.290Z] Copying: 502/1024 [MB] (19 MBps) [2024-11-20T13:00:11.234Z] Copying: 527/1024 [MB] (25 MBps) [2024-11-20T13:00:12.177Z] Copying: 544/1024 [MB] (17 MBps) [2024-11-20T13:00:13.120Z] Copying: 578/1024 [MB] (34 MBps) [2024-11-20T13:00:14.523Z] Copying: 597/1024 [MB] (18 MBps) [2024-11-20T13:00:15.109Z] Copying: 622/1024 [MB] (25 MBps) [2024-11-20T13:00:16.492Z] Copying: 652/1024 [MB] (29 MBps) [2024-11-20T13:00:17.432Z] Copying: 676/1024 [MB] (24 MBps) [2024-11-20T13:00:18.376Z] Copying: 719/1024 [MB] (43 MBps) [2024-11-20T13:00:19.320Z] Copying: 748/1024 [MB] (28 MBps) [2024-11-20T13:00:20.262Z] Copying: 775/1024 [MB] (26 MBps) [2024-11-20T13:00:21.205Z] Copying: 804/1024 [MB] (29 MBps) [2024-11-20T13:00:22.150Z] Copying: 834/1024 [MB] (29 MBps) [2024-11-20T13:00:23.537Z] Copying: 855/1024 [MB] (21 MBps) [2024-11-20T13:00:24.110Z] Copying: 881/1024 [MB] (25 MBps) [2024-11-20T13:00:25.495Z] Copying: 912/1024 [MB] (31 MBps) [2024-11-20T13:00:26.434Z] Copying: 950/1024 [MB] (38 MBps) [2024-11-20T13:00:27.377Z] Copying: 980/1024 [MB] (29 MBps) [2024-11-20T13:00:27.377Z] Copying: 1011/1024 [MB] (31 MBps) [2024-11-20T13:00:27.638Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-20 13:00:27.494453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.119 [2024-11-20 13:00:27.494760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:02.119 [2024-11-20 13:00:27.494785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:02.119 [2024-11-20 13:00:27.494796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.119 [2024-11-20 13:00:27.494832] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:02.119 [2024-11-20 13:00:27.498440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.119 [2024-11-20 13:00:27.498581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:02.119 [2024-11-20 13:00:27.498601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.588 ms 
00:28:02.119 [2024-11-20 13:00:27.498611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.119 [2024-11-20 13:00:27.498905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.119 [2024-11-20 13:00:27.498923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:02.119 [2024-11-20 13:00:27.498934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:28:02.119 [2024-11-20 13:00:27.498943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.119 [2024-11-20 13:00:27.513183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.119 [2024-11-20 13:00:27.513225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:02.119 [2024-11-20 13:00:27.513236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.222 ms 00:28:02.119 [2024-11-20 13:00:27.513245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.119 [2024-11-20 13:00:27.519572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.119 [2024-11-20 13:00:27.519606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:02.119 [2024-11-20 13:00:27.519627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.301 ms 00:28:02.119 [2024-11-20 13:00:27.519634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.119 [2024-11-20 13:00:27.545126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.119 [2024-11-20 13:00:27.545168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:02.119 [2024-11-20 13:00:27.545180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.439 ms 00:28:02.119 [2024-11-20 13:00:27.545188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.119 [2024-11-20 13:00:27.560009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.119 [2024-11-20 13:00:27.560052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:02.119 [2024-11-20 13:00:27.560064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.782 ms 00:28:02.119 [2024-11-20 13:00:27.560072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.119 [2024-11-20 13:00:27.565732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.119 [2024-11-20 13:00:27.565799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:02.119 [2024-11-20 13:00:27.565811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.630 ms 00:28:02.119 [2024-11-20 13:00:27.565820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.119 [2024-11-20 13:00:27.591791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.119 [2024-11-20 13:00:27.591834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:02.119 [2024-11-20 13:00:27.591846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.948 ms 00:28:02.119 [2024-11-20 13:00:27.591853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.119 [2024-11-20 13:00:27.617061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.119 [2024-11-20 13:00:27.617236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:02.119 [2024-11-20 13:00:27.617269] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.156 ms 00:28:02.119 [2024-11-20 13:00:27.617277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.381 [2024-11-20 13:00:27.642422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.381 [2024-11-20 13:00:27.642475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:02.381 [2024-11-20 13:00:27.642490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.785 ms 00:28:02.381 [2024-11-20 13:00:27.642497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.381 [2024-11-20 13:00:27.667491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.381 [2024-11-20 13:00:27.667537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:02.381 [2024-11-20 13:00:27.667548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.918 ms 00:28:02.381 [2024-11-20 13:00:27.667555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.381 [2024-11-20 13:00:27.667599] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:02.381 [2024-11-20 13:00:27.667615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:02.381 [2024-11-20 13:00:27.667625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1792 / 261120 wr_cnt: 1 state: open 00:28:02.381 [2024-11-20 13:00:27.667634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:02.381 [2024-11-20 13:00:27.667642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:02.381 [2024-11-20 13:00:27.667650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:02.381 [2024-11-20 13:00:27.667666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:02.381 [2024-11-20 13:00:27.667675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:02.381 [2024-11-20 13:00:27.667683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:02.381 [2024-11-20 13:00:27.667691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:02.381 [2024-11-20 13:00:27.667699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:02.381 [2024-11-20 13:00:27.667708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:02.381 [2024-11-20 13:00:27.667716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:02.381 [2024-11-20 13:00:27.667724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667783] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.667988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 
13:00:27.667996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:28:02.382 [2024-11-20 13:00:27.668191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:02.382 [2024-11-20 13:00:27.668465] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:02.383 [2024-11-20 13:00:27.668474] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f0a2d148-51f7-42a1-a3b5-778bfb33a11b 00:28:02.383 [2024-11-20 13:00:27.668483] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262912 00:28:02.383 [2024-11-20 13:00:27.668490] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 162496 00:28:02.383 [2024-11-20 13:00:27.668501] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 160512 00:28:02.383 [2024-11-20 13:00:27.668511] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0124 00:28:02.383 [2024-11-20 13:00:27.668519] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:02.383 [2024-11-20 13:00:27.668527] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:02.383 [2024-11-20 13:00:27.668534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:02.383 [2024-11-20 13:00:27.668548] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:02.383 [2024-11-20 13:00:27.668555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:02.383 [2024-11-20 13:00:27.668562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.383 [2024-11-20 13:00:27.668571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:02.383 [2024-11-20 13:00:27.668580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:28:02.383 [2024-11-20 13:00:27.668588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.383 [2024-11-20 13:00:27.682238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.383 [2024-11-20 13:00:27.682287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:02.383 [2024-11-20 13:00:27.682299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.631 ms 00:28:02.383 [2024-11-20 13:00:27.682307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.383 [2024-11-20 13:00:27.682707] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.383 [2024-11-20 13:00:27.682718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:02.383 [2024-11-20 13:00:27.682727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:28:02.383 [2024-11-20 13:00:27.682735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.383 [2024-11-20 13:00:27.719105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:02.383 [2024-11-20 13:00:27.719151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:02.383 [2024-11-20 13:00:27.719163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:02.383 [2024-11-20 13:00:27.719172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.383 [2024-11-20 13:00:27.719234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:02.383 [2024-11-20 13:00:27.719243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:02.383 [2024-11-20 13:00:27.719251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:02.383 [2024-11-20 13:00:27.719259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.383 [2024-11-20 13:00:27.719361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:02.383 [2024-11-20 13:00:27.719372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:02.383 [2024-11-20 13:00:27.719381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:02.383 [2024-11-20 13:00:27.719389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.383 [2024-11-20 13:00:27.719405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:02.383 [2024-11-20 13:00:27.719413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:02.383 [2024-11-20 13:00:27.719421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:02.383 [2024-11-20 13:00:27.719429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.383 [2024-11-20 13:00:27.803135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:02.383 [2024-11-20 13:00:27.803374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:02.383 [2024-11-20 13:00:27.803396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:02.383 [2024-11-20 13:00:27.803405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.383 [2024-11-20 13:00:27.873065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:02.383 [2024-11-20 13:00:27.873117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:02.383 [2024-11-20 13:00:27.873130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:02.383 [2024-11-20 13:00:27.873139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.383 [2024-11-20 13:00:27.873199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:02.383 [2024-11-20 13:00:27.873216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:02.383 [2024-11-20 13:00:27.873225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:02.383 [2024-11-20 13:00:27.873234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:28:02.383 [2024-11-20 13:00:27.873293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:02.383 [2024-11-20 13:00:27.873304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:02.383 [2024-11-20 13:00:27.873313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:02.383 [2024-11-20 13:00:27.873322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.383 [2024-11-20 13:00:27.873423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:02.383 [2024-11-20 13:00:27.873433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:02.383 [2024-11-20 13:00:27.873446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:02.383 [2024-11-20 13:00:27.873454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.383 [2024-11-20 13:00:27.873484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:02.383 [2024-11-20 13:00:27.873494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:02.383 [2024-11-20 13:00:27.873503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:02.383 [2024-11-20 13:00:27.873512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.383 [2024-11-20 13:00:27.873552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:02.383 [2024-11-20 13:00:27.873563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:02.383 [2024-11-20 13:00:27.873575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:02.383 [2024-11-20 13:00:27.873583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.383 [2024-11-20 13:00:27.873629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:02.383 [2024-11-20 13:00:27.873639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:02.383 [2024-11-20 13:00:27.873648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:02.383 [2024-11-20 13:00:27.873657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.383 [2024-11-20 13:00:27.873831] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 379.301 ms, result 0 00:28:03.326 00:28:03.326 00:28:03.326 13:00:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:05.871 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:05.871 13:00:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:05.871 [2024-11-20 13:00:30.849077] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
00:28:05.871 [2024-11-20 13:00:30.849196] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81861 ] 00:28:05.871 [2024-11-20 13:00:31.009271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.871 [2024-11-20 13:00:31.128382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.133 [2024-11-20 13:00:31.417021] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:06.133 [2024-11-20 13:00:31.417099] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:06.133 [2024-11-20 13:00:31.578662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.133 [2024-11-20 13:00:31.578724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:06.133 [2024-11-20 13:00:31.578770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:06.133 [2024-11-20 13:00:31.578780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.133 [2024-11-20 13:00:31.578836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.133 [2024-11-20 13:00:31.578847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:06.133 [2024-11-20 13:00:31.578859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:28:06.133 [2024-11-20 13:00:31.578867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.133 [2024-11-20 13:00:31.578888] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:06.134 [2024-11-20 13:00:31.579607] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:06.134 [2024-11-20 13:00:31.579633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.134 [2024-11-20 13:00:31.579642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:06.134 [2024-11-20 13:00:31.579652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:28:06.134 [2024-11-20 13:00:31.579661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.134 [2024-11-20 13:00:31.581333] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:06.134 [2024-11-20 13:00:31.595448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.134 [2024-11-20 13:00:31.595497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:06.134 [2024-11-20 13:00:31.595510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.115 ms 00:28:06.134 [2024-11-20 13:00:31.595519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.134 [2024-11-20 13:00:31.595599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.134 [2024-11-20 13:00:31.595609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:06.134 [2024-11-20 13:00:31.595618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:06.134 [2024-11-20 13:00:31.595625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.134 [2024-11-20 13:00:31.603605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:06.134 [2024-11-20 13:00:31.603649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:06.134 [2024-11-20 13:00:31.603659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.904 ms 00:28:06.134 [2024-11-20 13:00:31.603667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.134 [2024-11-20 13:00:31.603775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.134 [2024-11-20 13:00:31.603786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:06.134 [2024-11-20 13:00:31.603795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:28:06.134 [2024-11-20 13:00:31.603803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.134 [2024-11-20 13:00:31.603846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.134 [2024-11-20 13:00:31.603856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:06.134 [2024-11-20 13:00:31.603864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:06.134 [2024-11-20 13:00:31.603871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.134 [2024-11-20 13:00:31.603907] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:06.134 [2024-11-20 13:00:31.608080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.134 [2024-11-20 13:00:31.608115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:06.134 [2024-11-20 13:00:31.608126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.178 ms 00:28:06.134 [2024-11-20 13:00:31.608137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.134 [2024-11-20 13:00:31.608172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.134 [2024-11-20 13:00:31.608180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:06.134 [2024-11-20 13:00:31.608188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:06.134 [2024-11-20 13:00:31.608196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.134 [2024-11-20 13:00:31.608247] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:06.134 [2024-11-20 13:00:31.608270] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:06.134 [2024-11-20 13:00:31.608308] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:06.134 [2024-11-20 13:00:31.608327] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:06.134 [2024-11-20 13:00:31.608434] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:06.134 [2024-11-20 13:00:31.608446] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:06.134 [2024-11-20 13:00:31.608457] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:06.134 [2024-11-20 13:00:31.608467] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:06.134 [2024-11-20 13:00:31.608477] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:06.134 [2024-11-20 13:00:31.608486] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:06.134 [2024-11-20 13:00:31.608493] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:06.134 [2024-11-20 13:00:31.608501] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:06.134 [2024-11-20 13:00:31.608508] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:06.134 [2024-11-20 13:00:31.608520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.134 [2024-11-20 13:00:31.608528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:06.134 [2024-11-20 13:00:31.608536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:28:06.134 [2024-11-20 13:00:31.608543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.134 [2024-11-20 13:00:31.608625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.134 [2024-11-20 13:00:31.608634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:06.134 [2024-11-20 13:00:31.608641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:06.134 [2024-11-20 13:00:31.608648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.134 [2024-11-20 13:00:31.608773] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:06.134 [2024-11-20 13:00:31.608787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:06.134 [2024-11-20 13:00:31.608796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:06.134 [2024-11-20 13:00:31.608804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:06.134 [2024-11-20 13:00:31.608811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:06.134 [2024-11-20 13:00:31.608818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:06.134 [2024-11-20 13:00:31.608825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:06.134 [2024-11-20 13:00:31.608833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:06.134 [2024-11-20 13:00:31.608841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:06.134 [2024-11-20 13:00:31.608848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:06.134 [2024-11-20 13:00:31.608855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:06.134 [2024-11-20 13:00:31.608862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:06.134 [2024-11-20 13:00:31.608871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:06.134 [2024-11-20 13:00:31.608878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:06.134 [2024-11-20 13:00:31.608886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:06.134 [2024-11-20 13:00:31.608900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:06.134 [2024-11-20 13:00:31.608906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:06.134 [2024-11-20 13:00:31.608913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:06.134 [2024-11-20 13:00:31.608921] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:06.134 [2024-11-20 13:00:31.608928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:06.134 [2024-11-20 13:00:31.608935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:06.134 [2024-11-20 13:00:31.608942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:06.134 [2024-11-20 13:00:31.608948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:06.134 [2024-11-20 13:00:31.608955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:06.135 [2024-11-20 13:00:31.608962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:06.135 [2024-11-20 13:00:31.608968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:06.135 [2024-11-20 13:00:31.608975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:06.135 [2024-11-20 13:00:31.608981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:06.135 [2024-11-20 13:00:31.608988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:06.135 [2024-11-20 13:00:31.608994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:06.135 [2024-11-20 13:00:31.609001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:06.135 [2024-11-20 13:00:31.609008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:06.135 [2024-11-20 13:00:31.609014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:06.135 [2024-11-20 13:00:31.609020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:06.135 [2024-11-20 13:00:31.609027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:06.135 [2024-11-20 13:00:31.609033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:06.135 [2024-11-20 13:00:31.609040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:06.135 [2024-11-20 13:00:31.609046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:06.135 [2024-11-20 13:00:31.609053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:06.135 [2024-11-20 13:00:31.609067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:06.135 [2024-11-20 13:00:31.609074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:06.135 [2024-11-20 13:00:31.609080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:06.135 [2024-11-20 13:00:31.609086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:06.135 [2024-11-20 13:00:31.609092] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:06.135 [2024-11-20 13:00:31.609103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:06.135 [2024-11-20 13:00:31.609111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:06.135 [2024-11-20 13:00:31.609119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:06.135 [2024-11-20 13:00:31.609126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:06.135 [2024-11-20 13:00:31.609133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:06.135 [2024-11-20 13:00:31.609140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:06.135 
[2024-11-20 13:00:31.609146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:06.135 [2024-11-20 13:00:31.609153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:06.135 [2024-11-20 13:00:31.609159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:06.135 [2024-11-20 13:00:31.609168] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:06.135 [2024-11-20 13:00:31.609178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:06.135 [2024-11-20 13:00:31.609187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:06.135 [2024-11-20 13:00:31.609194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:06.135 [2024-11-20 13:00:31.609201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:06.135 [2024-11-20 13:00:31.609208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:06.135 [2024-11-20 13:00:31.609215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:06.135 [2024-11-20 13:00:31.609222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:06.135 [2024-11-20 13:00:31.609229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:06.135 [2024-11-20 13:00:31.609235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:06.135 [2024-11-20 13:00:31.609242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:06.135 [2024-11-20 13:00:31.609248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:06.135 [2024-11-20 13:00:31.609255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:06.135 [2024-11-20 13:00:31.609263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:06.135 [2024-11-20 13:00:31.609270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:06.135 [2024-11-20 13:00:31.609276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:06.135 [2024-11-20 13:00:31.609283] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:06.135 [2024-11-20 13:00:31.609294] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:06.135 [2024-11-20 13:00:31.609303] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:06.135 [2024-11-20 13:00:31.609310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:06.135 [2024-11-20 13:00:31.609317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:06.135 [2024-11-20 13:00:31.609325] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:06.135 [2024-11-20 13:00:31.609332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.135 [2024-11-20 13:00:31.609341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:06.135 [2024-11-20 13:00:31.609349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:28:06.135 [2024-11-20 13:00:31.609357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.135 [2024-11-20 13:00:31.641142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.135 [2024-11-20 13:00:31.641193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:06.135 [2024-11-20 13:00:31.641204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.741 ms 00:28:06.135 [2024-11-20 13:00:31.641212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.135 [2024-11-20 13:00:31.641305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.135 [2024-11-20 13:00:31.641313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:06.135 [2024-11-20 13:00:31.641322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:28:06.135 [2024-11-20 13:00:31.641329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.397 [2024-11-20 13:00:31.688617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.397 [2024-11-20 13:00:31.688843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:06.397 [2024-11-20 13:00:31.688873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.232 ms 00:28:06.397 [2024-11-20 13:00:31.688883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.397 [2024-11-20 13:00:31.688931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.397 [2024-11-20 13:00:31.688941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:06.397 [2024-11-20 13:00:31.688951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:06.397 [2024-11-20 13:00:31.688963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.397 [2024-11-20 13:00:31.689524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.397 [2024-11-20 13:00:31.689559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:06.397 [2024-11-20 13:00:31.689570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.486 ms 00:28:06.397 [2024-11-20 13:00:31.689579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.397 [2024-11-20 13:00:31.689757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.397 [2024-11-20 13:00:31.689775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:06.397 [2024-11-20 13:00:31.689784] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:28:06.397 [2024-11-20 13:00:31.689796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.397 [2024-11-20 13:00:31.705343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.397 [2024-11-20 13:00:31.705387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:06.397 [2024-11-20 13:00:31.705402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.526 ms 00:28:06.397 [2024-11-20 13:00:31.705410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.398 [2024-11-20 13:00:31.719719] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:06.398 [2024-11-20 13:00:31.719774] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:06.398 [2024-11-20 13:00:31.719788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.398 [2024-11-20 13:00:31.719796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:06.398 [2024-11-20 13:00:31.719805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.274 ms 00:28:06.398 [2024-11-20 13:00:31.719812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.398 [2024-11-20 13:00:31.745535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.398 [2024-11-20 13:00:31.745592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:06.398 [2024-11-20 13:00:31.745604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.671 ms 00:28:06.398 [2024-11-20 13:00:31.745612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.398 [2024-11-20 13:00:31.758551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.398 [2024-11-20 13:00:31.758597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:06.398 [2024-11-20 13:00:31.758608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.882 ms 00:28:06.398 [2024-11-20 13:00:31.758615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.398 [2024-11-20 13:00:31.771371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.398 [2024-11-20 13:00:31.771415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:06.398 [2024-11-20 13:00:31.771426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.713 ms 00:28:06.398 [2024-11-20 13:00:31.771433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.398 [2024-11-20 13:00:31.772127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.398 [2024-11-20 13:00:31.772155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:06.398 [2024-11-20 13:00:31.772166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:28:06.398 [2024-11-20 13:00:31.772178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.398 [2024-11-20 13:00:31.836856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.398 [2024-11-20 13:00:31.836920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:06.398 [2024-11-20 13:00:31.836942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.658 ms 00:28:06.398 [2024-11-20 13:00:31.836951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.398 [2024-11-20 13:00:31.848184] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:06.398 [2024-11-20 13:00:31.851518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.398 [2024-11-20 13:00:31.851563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:06.398 [2024-11-20 13:00:31.851576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.507 ms 00:28:06.398 [2024-11-20 13:00:31.851585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.398 [2024-11-20 13:00:31.851671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.398 [2024-11-20 13:00:31.851683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:06.398 [2024-11-20 13:00:31.851693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:28:06.398 [2024-11-20 13:00:31.851705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.398 [2024-11-20 13:00:31.852611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.398 [2024-11-20 13:00:31.852649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:06.398 [2024-11-20 13:00:31.852661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.844 ms 00:28:06.398 [2024-11-20 13:00:31.852671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.398 [2024-11-20 13:00:31.852707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.398 [2024-11-20 13:00:31.852717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:06.398 [2024-11-20 13:00:31.852727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:06.398 [2024-11-20 13:00:31.852752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.398 [2024-11-20 13:00:31.852792] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:06.398 [2024-11-20 13:00:31.852807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.398 [2024-11-20 13:00:31.852816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:06.398 [2024-11-20 13:00:31.852824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:06.398 [2024-11-20 13:00:31.852832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.398 [2024-11-20 13:00:31.878045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.398 [2024-11-20 13:00:31.878229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:06.398 [2024-11-20 13:00:31.878250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.194 ms 00:28:06.398 [2024-11-20 13:00:31.878266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.398 [2024-11-20 13:00:31.878346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.398 [2024-11-20 13:00:31.878357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:06.398 [2024-11-20 13:00:31.878366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:28:06.398 [2024-11-20 13:00:31.878373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
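
Each trace_step record above pairs a step name with a duration and a status, and the layout dump earlier in the trace is internally consistent: 20,971,520 L2P entries at an address size of 4 bytes come to 83,886,080 bytes, exactly the 80.00 MiB reported for the l2p region. The per-step durations can also be totaled and compared against the 'FTL startup' summary record that follows (duration = 300.566 ms); steps logged before this excerpt account for the remainder. A minimal sketch for doing that from a saved copy of this console output; the ftl.log path is a placeholder, not a file produced by this job:

    # Placeholder input: save the startup portion of this console log as ftl.log.
    # Every trace_step record carries a "duration: <n> ms" field; extract and sum them.
    grep -oE 'duration: [0-9.]+ ms' ftl.log \
        | awk '{ total += $2 } END { printf "trace_step total: %.3f ms\n", total }'
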
00:28:06.398 [2024-11-20 13:00:31.879708] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 300.566 ms, result 0 00:28:07.783  [2024-11-20T13:00:34.246Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-20T13:00:35.280Z] Copying: 40/1024 [MB] (20 MBps) [2024-11-20T13:00:36.225Z] Copying: 59/1024 [MB] (18 MBps) [2024-11-20T13:00:37.166Z] Copying: 91/1024 [MB] (32 MBps) [2024-11-20T13:00:38.110Z] Copying: 112/1024 [MB] (20 MBps) [2024-11-20T13:00:39.489Z] Copying: 140/1024 [MB] (27 MBps) [2024-11-20T13:00:40.429Z] Copying: 162/1024 [MB] (21 MBps) [2024-11-20T13:00:41.370Z] Copying: 185/1024 [MB] (23 MBps) [2024-11-20T13:00:42.310Z] Copying: 202/1024 [MB] (17 MBps) [2024-11-20T13:00:43.251Z] Copying: 223/1024 [MB] (21 MBps) [2024-11-20T13:00:44.193Z] Copying: 236/1024 [MB] (13 MBps) [2024-11-20T13:00:45.135Z] Copying: 261/1024 [MB] (24 MBps) [2024-11-20T13:00:46.076Z] Copying: 273/1024 [MB] (11 MBps) [2024-11-20T13:00:47.457Z] Copying: 283/1024 [MB] (10 MBps) [2024-11-20T13:00:48.395Z] Copying: 301/1024 [MB] (17 MBps) [2024-11-20T13:00:49.347Z] Copying: 316/1024 [MB] (14 MBps) [2024-11-20T13:00:50.293Z] Copying: 326/1024 [MB] (10 MBps) [2024-11-20T13:00:51.234Z] Copying: 337/1024 [MB] (10 MBps) [2024-11-20T13:00:52.176Z] Copying: 357/1024 [MB] (19 MBps) [2024-11-20T13:00:53.119Z] Copying: 369/1024 [MB] (12 MBps) [2024-11-20T13:00:54.061Z] Copying: 380/1024 [MB] (10 MBps) [2024-11-20T13:00:55.444Z] Copying: 394/1024 [MB] (14 MBps) [2024-11-20T13:00:56.376Z] Copying: 418/1024 [MB] (24 MBps) [2024-11-20T13:00:57.319Z] Copying: 435/1024 [MB] (16 MBps) [2024-11-20T13:00:58.268Z] Copying: 448/1024 [MB] (12 MBps) [2024-11-20T13:00:59.242Z] Copying: 466/1024 [MB] (18 MBps) [2024-11-20T13:01:00.175Z] Copying: 489/1024 [MB] (23 MBps) [2024-11-20T13:01:01.113Z] Copying: 503/1024 [MB] (13 MBps) [2024-11-20T13:01:02.498Z] Copying: 517/1024 [MB] (13 MBps) [2024-11-20T13:01:03.071Z] Copying: 539/1024 [MB] (22 MBps) [2024-11-20T13:01:04.456Z] Copying: 554/1024 [MB] (14 MBps) [2024-11-20T13:01:05.398Z] Copying: 571/1024 [MB] (16 MBps) [2024-11-20T13:01:06.341Z] Copying: 583/1024 [MB] (12 MBps) [2024-11-20T13:01:07.285Z] Copying: 606/1024 [MB] (23 MBps) [2024-11-20T13:01:08.231Z] Copying: 623/1024 [MB] (16 MBps) [2024-11-20T13:01:09.177Z] Copying: 640/1024 [MB] (17 MBps) [2024-11-20T13:01:10.120Z] Copying: 658/1024 [MB] (17 MBps) [2024-11-20T13:01:11.065Z] Copying: 680/1024 [MB] (22 MBps) [2024-11-20T13:01:12.454Z] Copying: 693/1024 [MB] (12 MBps) [2024-11-20T13:01:13.398Z] Copying: 703/1024 [MB] (10 MBps) [2024-11-20T13:01:14.341Z] Copying: 714/1024 [MB] (10 MBps) [2024-11-20T13:01:15.286Z] Copying: 728/1024 [MB] (14 MBps) [2024-11-20T13:01:16.249Z] Copying: 744/1024 [MB] (16 MBps) [2024-11-20T13:01:17.191Z] Copying: 755/1024 [MB] (10 MBps) [2024-11-20T13:01:18.134Z] Copying: 770/1024 [MB] (14 MBps) [2024-11-20T13:01:19.072Z] Copying: 790/1024 [MB] (20 MBps) [2024-11-20T13:01:20.446Z] Copying: 810/1024 [MB] (20 MBps) [2024-11-20T13:01:21.378Z] Copying: 824/1024 [MB] (13 MBps) [2024-11-20T13:01:22.315Z] Copying: 842/1024 [MB] (18 MBps) [2024-11-20T13:01:23.259Z] Copying: 858/1024 [MB] (15 MBps) [2024-11-20T13:01:24.201Z] Copying: 878/1024 [MB] (20 MBps) [2024-11-20T13:01:25.143Z] Copying: 895/1024 [MB] (17 MBps) [2024-11-20T13:01:26.087Z] Copying: 916/1024 [MB] (21 MBps) [2024-11-20T13:01:27.473Z] Copying: 935/1024 [MB] (18 MBps) [2024-11-20T13:01:28.410Z] Copying: 950/1024 [MB] (15 MBps) [2024-11-20T13:01:29.348Z] Copying: 971/1024 [MB] (20 MBps) 
[2024-11-20T13:01:30.285Z] Copying: 989/1024 [MB] (17 MBps) [2024-11-20T13:01:30.855Z] Copying: 1009/1024 [MB] (20 MBps) [2024-11-20T13:01:30.855Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-20 13:01:30.823679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.336 [2024-11-20 13:01:30.823779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:05.336 [2024-11-20 13:01:30.823801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:05.336 [2024-11-20 13:01:30.823814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.336 [2024-11-20 13:01:30.823846] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:05.336 [2024-11-20 13:01:30.827957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.336 [2024-11-20 13:01:30.828003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:05.336 [2024-11-20 13:01:30.828024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.090 ms 00:29:05.336 [2024-11-20 13:01:30.828037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.336 [2024-11-20 13:01:30.828349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.336 [2024-11-20 13:01:30.828364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:05.336 [2024-11-20 13:01:30.828376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:29:05.336 [2024-11-20 13:01:30.828387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.336 [2024-11-20 13:01:30.833993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.336 [2024-11-20 13:01:30.834024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:05.336 [2024-11-20 13:01:30.834035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.585 ms 00:29:05.336 [2024-11-20 13:01:30.834043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.336 [2024-11-20 13:01:30.840231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.336 [2024-11-20 13:01:30.840259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:05.336 [2024-11-20 13:01:30.840270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.163 ms 00:29:05.336 [2024-11-20 13:01:30.840277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.598 [2024-11-20 13:01:30.866203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.598 [2024-11-20 13:01:30.866243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:05.598 [2024-11-20 13:01:30.866255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.875 ms 00:29:05.598 [2024-11-20 13:01:30.866262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.598 [2024-11-20 13:01:30.881704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.598 [2024-11-20 13:01:30.881763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:05.598 [2024-11-20 13:01:30.881776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.403 ms 00:29:05.598 [2024-11-20 13:01:30.881784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.598 [2024-11-20 13:01:30.886702] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.598 [2024-11-20 13:01:30.886762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:05.598 [2024-11-20 13:01:30.886774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.873 ms 00:29:05.599 [2024-11-20 13:01:30.886781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.599 [2024-11-20 13:01:30.912426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.599 [2024-11-20 13:01:30.912467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:05.599 [2024-11-20 13:01:30.912478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.628 ms 00:29:05.599 [2024-11-20 13:01:30.912486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.599 [2024-11-20 13:01:30.937537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.599 [2024-11-20 13:01:30.937595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:05.599 [2024-11-20 13:01:30.937607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.007 ms 00:29:05.599 [2024-11-20 13:01:30.937615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.599 [2024-11-20 13:01:30.962863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.599 [2024-11-20 13:01:30.962908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:05.599 [2024-11-20 13:01:30.962920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.200 ms 00:29:05.599 [2024-11-20 13:01:30.962927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.599 [2024-11-20 13:01:30.987621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.599 [2024-11-20 13:01:30.987668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:05.599 [2024-11-20 13:01:30.987680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.604 ms 00:29:05.599 [2024-11-20 13:01:30.987688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.599 [2024-11-20 13:01:30.987734] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:05.599 [2024-11-20 13:01:30.987767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:05.599 [2024-11-20 13:01:30.987788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1792 / 261120 wr_cnt: 1 state: open 00:29:05.599 [2024-11-20 13:01:30.987797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.987806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.987814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.987822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.987830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.987839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.987847] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.987855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.987862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.987871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.987879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.987886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.987918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.987926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.987934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.987942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.987949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.987959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.987967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.987975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.987983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.987991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.987999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988078] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 
13:01:30.988283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:05.599 [2024-11-20 13:01:30.988378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 
00:29:05.600 [2024-11-20 13:01:30.988473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:05.600 [2024-11-20 13:01:30.988616] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:05.600 [2024-11-20 13:01:30.988630] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f0a2d148-51f7-42a1-a3b5-778bfb33a11b 00:29:05.600 [2024-11-20 13:01:30.988638] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262912 00:29:05.600 [2024-11-20 13:01:30.988645] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:05.600 [2024-11-20 13:01:30.988653] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:05.600 [2024-11-20 13:01:30.988662] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:05.600 [2024-11-20 13:01:30.988670] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:05.600 [2024-11-20 13:01:30.988678] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:05.600 [2024-11-20 13:01:30.988696] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:05.600 [2024-11-20 13:01:30.988703] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:05.600 [2024-11-20 13:01:30.988709] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:05.600 [2024-11-20 13:01:30.988717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.600 [2024-11-20 13:01:30.988726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:05.600 [2024-11-20 13:01:30.988736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:29:05.600 [2024-11-20 13:01:30.988759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.600 [2024-11-20 13:01:31.003163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.600 [2024-11-20 13:01:31.003204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:05.600 [2024-11-20 13:01:31.003216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.380 ms 00:29:05.600 [2024-11-20 13:01:31.003224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.600 [2024-11-20 13:01:31.003645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.600 [2024-11-20 13:01:31.003656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:05.600 [2024-11-20 13:01:31.003674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:29:05.600 [2024-11-20 13:01:31.003682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.600 [2024-11-20 13:01:31.042350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.600 [2024-11-20 13:01:31.042398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:05.600 [2024-11-20 13:01:31.042411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.600 [2024-11-20 13:01:31.042422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.600 [2024-11-20 13:01:31.042490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.600 [2024-11-20 13:01:31.042502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:05.600 [2024-11-20 13:01:31.042517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.600 [2024-11-20 13:01:31.042528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.600 [2024-11-20 13:01:31.042601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.600 [2024-11-20 13:01:31.042612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:05.600 [2024-11-20 13:01:31.042622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.600 [2024-11-20 13:01:31.042631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.600 [2024-11-20 13:01:31.042647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.600 [2024-11-20 13:01:31.042655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:05.600 [2024-11-20 13:01:31.042663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.600 [2024-11-20 13:01:31.042675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.862 [2024-11-20 13:01:31.134523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.862 [2024-11-20 13:01:31.134592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV 
cache 00:29:05.862 [2024-11-20 13:01:31.134607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.862 [2024-11-20 13:01:31.134616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.862 [2024-11-20 13:01:31.209921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.862 [2024-11-20 13:01:31.209992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:05.862 [2024-11-20 13:01:31.210007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.862 [2024-11-20 13:01:31.210023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.862 [2024-11-20 13:01:31.210131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.862 [2024-11-20 13:01:31.210145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:05.862 [2024-11-20 13:01:31.210155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.862 [2024-11-20 13:01:31.210165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.862 [2024-11-20 13:01:31.210210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.862 [2024-11-20 13:01:31.210222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:05.862 [2024-11-20 13:01:31.210231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.862 [2024-11-20 13:01:31.210240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.862 [2024-11-20 13:01:31.210354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.862 [2024-11-20 13:01:31.210368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:05.862 [2024-11-20 13:01:31.210378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.862 [2024-11-20 13:01:31.210389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.862 [2024-11-20 13:01:31.210423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.862 [2024-11-20 13:01:31.210435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:05.862 [2024-11-20 13:01:31.210445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.862 [2024-11-20 13:01:31.210454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.862 [2024-11-20 13:01:31.210510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.862 [2024-11-20 13:01:31.210523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:05.862 [2024-11-20 13:01:31.210532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.862 [2024-11-20 13:01:31.210541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.862 [2024-11-20 13:01:31.210603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.862 [2024-11-20 13:01:31.210614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:05.862 [2024-11-20 13:01:31.210626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.862 [2024-11-20 13:01:31.210635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.862 [2024-11-20 13:01:31.210850] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 
387.077 ms, result 0 00:29:06.806 00:29:06.806 00:29:06.807 13:01:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:09.355 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:09.355 13:01:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:09.355 13:01:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:09.355 13:01:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:09.355 13:01:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:09.355 13:01:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:09.355 13:01:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:09.355 13:01:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:09.355 13:01:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 80151 00:29:09.355 13:01:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80151 ']' 00:29:09.355 13:01:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 80151 00:29:09.355 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80151) - No such process 00:29:09.355 Process with pid 80151 is not found 00:29:09.355 13:01:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 80151 is not found' 00:29:09.355 13:01:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:09.355 Remove shared memory files 00:29:09.355 13:01:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:09.355 13:01:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:09.355 13:01:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:09.355 13:01:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:09.355 13:01:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:29:09.355 13:01:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:09.355 13:01:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:09.355 ************************************ 00:29:09.355 END TEST ftl_dirty_shutdown 00:29:09.355 ************************************ 00:29:09.355 00:29:09.355 real 3m44.197s 00:29:09.355 user 3m59.302s 00:29:09.355 sys 0m23.154s 00:29:09.355 13:01:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:09.355 13:01:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:09.355 13:01:34 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:09.355 13:01:34 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:09.355 13:01:34 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:09.355 13:01:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:09.355 ************************************ 00:29:09.355 START TEST ftl_upgrade_shutdown 00:29:09.355 ************************************ 00:29:09.355 13:01:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:09.617 * Looking for test storage... 00:29:09.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:09.617 13:01:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:09.617 13:01:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:29:09.617 13:01:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:09.617 13:01:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:09.617 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:09.617 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:09.617 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:09.617 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:09.617 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:09.617 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:09.617 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:09.617 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:09.617 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:09.617 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:09.617 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:09.617 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:09.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.618 --rc genhtml_branch_coverage=1 00:29:09.618 --rc genhtml_function_coverage=1 00:29:09.618 --rc genhtml_legend=1 00:29:09.618 --rc geninfo_all_blocks=1 00:29:09.618 --rc geninfo_unexecuted_blocks=1 00:29:09.618 00:29:09.618 ' 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:09.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.618 --rc genhtml_branch_coverage=1 00:29:09.618 --rc genhtml_function_coverage=1 00:29:09.618 --rc genhtml_legend=1 00:29:09.618 --rc geninfo_all_blocks=1 00:29:09.618 --rc geninfo_unexecuted_blocks=1 00:29:09.618 00:29:09.618 ' 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:09.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.618 --rc genhtml_branch_coverage=1 00:29:09.618 --rc genhtml_function_coverage=1 00:29:09.618 --rc genhtml_legend=1 00:29:09.618 --rc geninfo_all_blocks=1 00:29:09.618 --rc geninfo_unexecuted_blocks=1 00:29:09.618 00:29:09.618 ' 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:09.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.618 --rc genhtml_branch_coverage=1 00:29:09.618 --rc genhtml_function_coverage=1 00:29:09.618 --rc genhtml_legend=1 00:29:09.618 --rc geninfo_all_blocks=1 00:29:09.618 --rc geninfo_unexecuted_blocks=1 00:29:09.618 00:29:09.618 ' 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:09.618 13:01:34 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:09.618 13:01:35 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82581 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:09.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82581 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82581 ']' 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:09.618 13:01:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:09.618 [2024-11-20 13:01:35.114297] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
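Condensed, the tcp_target_setup step traced above launches the SPDK target pinned to core 0 and blocks until its RPC socket answers. A minimal sketch of that bring-up (the spdk_get_version probe is an assumption standing in for the real waitforlisten helper, which is not shown in full in the trace):

  spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$spdk_tgt_bin" '--cpumask=[0]' &
  spdk_tgt_pid=$!
  # Poll the default RPC socket (/var/tmp/spdk.sock) until the target is up.
  for _ in $(seq 1 100); do
      "$rpc" spdk_get_version >/dev/null 2>&1 && break
      sleep 0.5
  done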
00:29:09.618 [2024-11-20 13:01:35.115259] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82581 ] 00:29:09.880 [2024-11-20 13:01:35.285231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.141 [2024-11-20 13:01:35.432386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:11.085 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:11.086 13:01:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:11.086 13:01:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:11.086 13:01:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:11.086 13:01:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:29:11.086 13:01:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:11.347 13:01:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:11.347 { 00:29:11.347 "name": "basen1", 00:29:11.347 "aliases": [ 00:29:11.347 "ecf50b38-a2d5-4d3a-b844-31f445e9d0f1" 00:29:11.347 ], 00:29:11.347 "product_name": "NVMe disk", 00:29:11.347 "block_size": 4096, 00:29:11.347 "num_blocks": 1310720, 00:29:11.347 "uuid": "ecf50b38-a2d5-4d3a-b844-31f445e9d0f1", 00:29:11.347 "numa_id": -1, 00:29:11.347 "assigned_rate_limits": { 00:29:11.347 "rw_ios_per_sec": 0, 00:29:11.347 "rw_mbytes_per_sec": 0, 00:29:11.347 "r_mbytes_per_sec": 0, 00:29:11.347 "w_mbytes_per_sec": 0 00:29:11.347 }, 00:29:11.347 "claimed": true, 00:29:11.347 "claim_type": "read_many_write_one", 00:29:11.347 "zoned": false, 00:29:11.347 "supported_io_types": { 00:29:11.347 "read": true, 00:29:11.347 "write": true, 00:29:11.347 "unmap": true, 00:29:11.347 "flush": true, 00:29:11.347 "reset": true, 00:29:11.347 "nvme_admin": true, 00:29:11.347 "nvme_io": true, 00:29:11.347 "nvme_io_md": false, 00:29:11.347 "write_zeroes": true, 00:29:11.347 "zcopy": false, 00:29:11.347 "get_zone_info": false, 00:29:11.347 "zone_management": false, 00:29:11.347 "zone_append": false, 00:29:11.347 "compare": true, 00:29:11.347 "compare_and_write": false, 00:29:11.347 "abort": true, 00:29:11.347 "seek_hole": false, 00:29:11.347 "seek_data": false, 00:29:11.347 "copy": true, 00:29:11.347 "nvme_iov_md": false 00:29:11.347 }, 00:29:11.347 "driver_specific": { 00:29:11.347 "nvme": [ 00:29:11.347 { 00:29:11.347 "pci_address": "0000:00:11.0", 00:29:11.347 "trid": { 00:29:11.347 "trtype": "PCIe", 00:29:11.347 "traddr": "0000:00:11.0" 00:29:11.347 }, 00:29:11.347 "ctrlr_data": { 00:29:11.347 "cntlid": 0, 00:29:11.347 "vendor_id": "0x1b36", 00:29:11.347 "model_number": "QEMU NVMe Ctrl", 00:29:11.347 "serial_number": "12341", 00:29:11.347 "firmware_revision": "8.0.0", 00:29:11.347 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:11.347 "oacs": { 00:29:11.347 "security": 0, 00:29:11.347 "format": 1, 00:29:11.347 "firmware": 0, 00:29:11.347 "ns_manage": 1 00:29:11.347 }, 00:29:11.347 "multi_ctrlr": false, 00:29:11.347 "ana_reporting": false 00:29:11.347 }, 00:29:11.347 "vs": { 00:29:11.347 "nvme_version": "1.4" 00:29:11.347 }, 00:29:11.347 "ns_data": { 00:29:11.347 "id": 1, 00:29:11.347 "can_share": false 00:29:11.347 } 00:29:11.347 } 00:29:11.347 ], 00:29:11.347 "mp_policy": "active_passive" 00:29:11.347 } 00:29:11.347 } 00:29:11.347 ]' 00:29:11.347 13:01:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:11.347 13:01:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:11.347 13:01:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:11.347 13:01:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:11.347 13:01:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:11.347 13:01:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:11.347 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:11.347 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:11.347 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:11.347 13:01:36 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:11.347 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:11.609 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=e9f75d8f-69d6-4953-85e4-ce3927fa5828 00:29:11.609 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:11.609 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e9f75d8f-69d6-4953-85e4-ce3927fa5828 00:29:11.870 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:12.168 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=60f92bfc-5bb5-4693-b2de-3de8a56539a8 00:29:12.168 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 60f92bfc-5bb5-4693-b2de-3de8a56539a8 00:29:12.436 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=eaf577b7-f84a-4bbd-90c4-f7abf7277880 00:29:12.436 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z eaf577b7-f84a-4bbd-90c4-f7abf7277880 ]] 00:29:12.436 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 eaf577b7-f84a-4bbd-90c4-f7abf7277880 5120 00:29:12.436 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:12.436 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:12.436 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=eaf577b7-f84a-4bbd-90c4-f7abf7277880 00:29:12.436 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:12.436 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size eaf577b7-f84a-4bbd-90c4-f7abf7277880 00:29:12.436 13:01:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=eaf577b7-f84a-4bbd-90c4-f7abf7277880 00:29:12.436 13:01:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:12.436 13:01:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:12.436 13:01:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:12.436 13:01:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b eaf577b7-f84a-4bbd-90c4-f7abf7277880 00:29:12.698 13:01:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:12.698 { 00:29:12.698 "name": "eaf577b7-f84a-4bbd-90c4-f7abf7277880", 00:29:12.698 "aliases": [ 00:29:12.698 "lvs/basen1p0" 00:29:12.698 ], 00:29:12.698 "product_name": "Logical Volume", 00:29:12.698 "block_size": 4096, 00:29:12.698 "num_blocks": 5242880, 00:29:12.698 "uuid": "eaf577b7-f84a-4bbd-90c4-f7abf7277880", 00:29:12.698 "assigned_rate_limits": { 00:29:12.698 "rw_ios_per_sec": 0, 00:29:12.698 "rw_mbytes_per_sec": 0, 00:29:12.698 "r_mbytes_per_sec": 0, 00:29:12.698 "w_mbytes_per_sec": 0 00:29:12.698 }, 00:29:12.698 "claimed": false, 00:29:12.698 "zoned": false, 00:29:12.698 "supported_io_types": { 00:29:12.698 "read": true, 00:29:12.698 "write": true, 00:29:12.698 "unmap": true, 00:29:12.698 "flush": false, 00:29:12.698 "reset": true, 00:29:12.698 "nvme_admin": false, 00:29:12.698 "nvme_io": false, 00:29:12.698 "nvme_io_md": false, 00:29:12.698 "write_zeroes": 
true, 00:29:12.698 "zcopy": false, 00:29:12.698 "get_zone_info": false, 00:29:12.698 "zone_management": false, 00:29:12.698 "zone_append": false, 00:29:12.698 "compare": false, 00:29:12.698 "compare_and_write": false, 00:29:12.698 "abort": false, 00:29:12.698 "seek_hole": true, 00:29:12.698 "seek_data": true, 00:29:12.698 "copy": false, 00:29:12.698 "nvme_iov_md": false 00:29:12.698 }, 00:29:12.698 "driver_specific": { 00:29:12.698 "lvol": { 00:29:12.698 "lvol_store_uuid": "60f92bfc-5bb5-4693-b2de-3de8a56539a8", 00:29:12.698 "base_bdev": "basen1", 00:29:12.698 "thin_provision": true, 00:29:12.698 "num_allocated_clusters": 0, 00:29:12.698 "snapshot": false, 00:29:12.698 "clone": false, 00:29:12.698 "esnap_clone": false 00:29:12.698 } 00:29:12.698 } 00:29:12.698 } 00:29:12.698 ]' 00:29:12.698 13:01:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:12.698 13:01:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:12.698 13:01:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:12.698 13:01:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:12.698 13:01:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:12.698 13:01:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:12.698 13:01:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:12.698 13:01:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:12.698 13:01:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:12.960 13:01:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:12.960 13:01:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:12.960 13:01:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:13.222 13:01:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:13.222 13:01:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:13.222 13:01:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d eaf577b7-f84a-4bbd-90c4-f7abf7277880 -c cachen1p0 --l2p_dram_limit 2 00:29:13.222 [2024-11-20 13:01:38.716755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.222 [2024-11-20 13:01:38.716822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:13.222 [2024-11-20 13:01:38.716842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:13.222 [2024-11-20 13:01:38.716852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.222 [2024-11-20 13:01:38.716918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.222 [2024-11-20 13:01:38.716930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:13.222 [2024-11-20 13:01:38.716941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:29:13.222 [2024-11-20 13:01:38.716949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.222 [2024-11-20 13:01:38.716973] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:13.222 [2024-11-20 
13:01:38.717775] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:13.222 [2024-11-20 13:01:38.717804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.222 [2024-11-20 13:01:38.717814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:13.222 [2024-11-20 13:01:38.717827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.833 ms 00:29:13.222 [2024-11-20 13:01:38.717835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.222 [2024-11-20 13:01:38.717878] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 0f9352ec-5ea0-423d-a893-6fc813d38d02 00:29:13.222 [2024-11-20 13:01:38.720244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.222 [2024-11-20 13:01:38.720301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:13.222 [2024-11-20 13:01:38.720312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:29:13.222 [2024-11-20 13:01:38.720324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.222 [2024-11-20 13:01:38.733028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.222 [2024-11-20 13:01:38.733242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:13.222 [2024-11-20 13:01:38.733266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.613 ms 00:29:13.222 [2024-11-20 13:01:38.733278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.222 [2024-11-20 13:01:38.733332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.222 [2024-11-20 13:01:38.733344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:13.222 [2024-11-20 13:01:38.733353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:29:13.222 [2024-11-20 13:01:38.733367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.222 [2024-11-20 13:01:38.733425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.222 [2024-11-20 13:01:38.733438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:13.222 [2024-11-20 13:01:38.733446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:13.222 [2024-11-20 13:01:38.733463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.222 [2024-11-20 13:01:38.733486] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:13.485 [2024-11-20 13:01:38.738513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.485 [2024-11-20 13:01:38.738558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:13.485 [2024-11-20 13:01:38.738575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.029 ms 00:29:13.485 [2024-11-20 13:01:38.738584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.485 [2024-11-20 13:01:38.738620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.485 [2024-11-20 13:01:38.738629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:13.485 [2024-11-20 13:01:38.738640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:13.485 [2024-11-20 13:01:38.738649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:13.485 [2024-11-20 13:01:38.738690] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:13.485 [2024-11-20 13:01:38.738869] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:13.485 [2024-11-20 13:01:38.738890] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:13.485 [2024-11-20 13:01:38.738902] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:13.485 [2024-11-20 13:01:38.738916] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:13.485 [2024-11-20 13:01:38.738927] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:13.485 [2024-11-20 13:01:38.738941] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:13.485 [2024-11-20 13:01:38.738949] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:13.485 [2024-11-20 13:01:38.738963] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:13.485 [2024-11-20 13:01:38.738971] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:13.485 [2024-11-20 13:01:38.738982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.485 [2024-11-20 13:01:38.738990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:13.485 [2024-11-20 13:01:38.739002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.294 ms 00:29:13.485 [2024-11-20 13:01:38.739010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.485 [2024-11-20 13:01:38.739097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.485 [2024-11-20 13:01:38.739108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:13.485 [2024-11-20 13:01:38.739121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:29:13.485 [2024-11-20 13:01:38.739139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.485 [2024-11-20 13:01:38.739251] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:13.485 [2024-11-20 13:01:38.739263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:13.485 [2024-11-20 13:01:38.739275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:13.485 [2024-11-20 13:01:38.739284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:13.485 [2024-11-20 13:01:38.739295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:13.485 [2024-11-20 13:01:38.739302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:13.485 [2024-11-20 13:01:38.739312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:13.485 [2024-11-20 13:01:38.739319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:13.485 [2024-11-20 13:01:38.739328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:13.485 [2024-11-20 13:01:38.739336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:13.485 [2024-11-20 13:01:38.739346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:13.485 [2024-11-20 13:01:38.739353] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:13.485 [2024-11-20 13:01:38.739363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:13.485 [2024-11-20 13:01:38.739376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:13.485 [2024-11-20 13:01:38.739387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:13.485 [2024-11-20 13:01:38.739395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:13.485 [2024-11-20 13:01:38.739406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:13.485 [2024-11-20 13:01:38.739414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:13.485 [2024-11-20 13:01:38.739424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:13.485 [2024-11-20 13:01:38.739432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:13.485 [2024-11-20 13:01:38.739445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:13.485 [2024-11-20 13:01:38.739452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:13.485 [2024-11-20 13:01:38.739462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:13.485 [2024-11-20 13:01:38.739469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:13.485 [2024-11-20 13:01:38.739478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:13.485 [2024-11-20 13:01:38.739484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:13.485 [2024-11-20 13:01:38.739494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:13.485 [2024-11-20 13:01:38.739500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:13.485 [2024-11-20 13:01:38.739509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:13.485 [2024-11-20 13:01:38.739515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:13.485 [2024-11-20 13:01:38.739524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:13.485 [2024-11-20 13:01:38.739531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:13.485 [2024-11-20 13:01:38.739542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:13.485 [2024-11-20 13:01:38.739548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:13.485 [2024-11-20 13:01:38.739558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:13.485 [2024-11-20 13:01:38.739564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:13.485 [2024-11-20 13:01:38.739574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:13.485 [2024-11-20 13:01:38.739580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:13.485 [2024-11-20 13:01:38.739590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:13.485 [2024-11-20 13:01:38.739597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:13.485 [2024-11-20 13:01:38.739606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:13.485 [2024-11-20 13:01:38.739612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:13.485 [2024-11-20 13:01:38.739621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:13.485 [2024-11-20 13:01:38.739628] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:13.485 [2024-11-20 13:01:38.739654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:13.485 [2024-11-20 13:01:38.739665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:13.485 [2024-11-20 13:01:38.739678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:13.485 [2024-11-20 13:01:38.739687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:13.485 [2024-11-20 13:01:38.739700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:13.486 [2024-11-20 13:01:38.739707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:13.486 [2024-11-20 13:01:38.739717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:13.486 [2024-11-20 13:01:38.739724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:13.486 [2024-11-20 13:01:38.739733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:13.486 [2024-11-20 13:01:38.739761] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:13.486 [2024-11-20 13:01:38.739775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:13.486 [2024-11-20 13:01:38.739786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:13.486 [2024-11-20 13:01:38.739796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:13.486 [2024-11-20 13:01:38.739803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:13.486 [2024-11-20 13:01:38.739812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:13.486 [2024-11-20 13:01:38.739819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:13.486 [2024-11-20 13:01:38.739828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:13.486 [2024-11-20 13:01:38.739835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:13.486 [2024-11-20 13:01:38.739844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:13.486 [2024-11-20 13:01:38.739851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:13.486 [2024-11-20 13:01:38.739862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:13.486 [2024-11-20 13:01:38.739869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:13.486 [2024-11-20 13:01:38.739877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:13.486 [2024-11-20 13:01:38.739884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:13.486 [2024-11-20 13:01:38.739913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:13.486 [2024-11-20 13:01:38.739920] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:13.486 [2024-11-20 13:01:38.739932] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:13.486 [2024-11-20 13:01:38.739941] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:13.486 [2024-11-20 13:01:38.739951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:13.486 [2024-11-20 13:01:38.739958] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:13.486 [2024-11-20 13:01:38.739968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:13.486 [2024-11-20 13:01:38.739976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.486 [2024-11-20 13:01:38.739986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:13.486 [2024-11-20 13:01:38.739996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.798 ms 00:29:13.486 [2024-11-20 13:01:38.740007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.486 [2024-11-20 13:01:38.740050] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
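The layout numbers in the dump above are internally consistent; a quick sanity check in shell arithmetic (the 20% overprovisioning factor is an inference from the figures, not something the log states):

  data_mib=18432                                # "Region data_btm ... blocks: 18432.00 MiB"
  blocks=$(( data_mib * 1024 * 1024 / 4096 ))   # 4718592 4-KiB data blocks
  echo $(( blocks * 80 / 100 ))                 # 3774873 -> matches "L2P entries: 3774873"
  echo $(( 3774873 * 4 / 1024 / 1024 ))         # ~14 MiB of 4-byte entries -> the 14.50 MiB l2p region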
00:29:13.486 [2024-11-20 13:01:38.740065] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:17.683 [2024-11-20 13:01:42.404035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.683 [2024-11-20 13:01:42.404094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:17.683 [2024-11-20 13:01:42.404107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3663.970 ms 00:29:17.683 [2024-11-20 13:01:42.404116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.683 [2024-11-20 13:01:42.427502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.683 [2024-11-20 13:01:42.427665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:17.683 [2024-11-20 13:01:42.427682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.210 ms 00:29:17.683 [2024-11-20 13:01:42.427691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.683 [2024-11-20 13:01:42.427764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.683 [2024-11-20 13:01:42.427774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:17.683 [2024-11-20 13:01:42.427782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:29:17.683 [2024-11-20 13:01:42.427792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.683 [2024-11-20 13:01:42.454484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.683 [2024-11-20 13:01:42.454610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:17.683 [2024-11-20 13:01:42.454624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.652 ms 00:29:17.683 [2024-11-20 13:01:42.454632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.683 [2024-11-20 13:01:42.454656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.683 [2024-11-20 13:01:42.454669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:17.683 [2024-11-20 13:01:42.454676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:17.683 [2024-11-20 13:01:42.454683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.683 [2024-11-20 13:01:42.455095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.683 [2024-11-20 13:01:42.455114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:17.683 [2024-11-20 13:01:42.455122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.377 ms 00:29:17.683 [2024-11-20 13:01:42.455130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.683 [2024-11-20 13:01:42.455169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.683 [2024-11-20 13:01:42.455178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:17.683 [2024-11-20 13:01:42.455188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:17.683 [2024-11-20 13:01:42.455197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.683 [2024-11-20 13:01:42.468201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.683 [2024-11-20 13:01:42.468231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:17.684 [2024-11-20 13:01:42.468239] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.989 ms 00:29:17.684 [2024-11-20 13:01:42.468246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.684 [2024-11-20 13:01:42.478163] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:17.684 [2024-11-20 13:01:42.479103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.684 [2024-11-20 13:01:42.479126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:17.684 [2024-11-20 13:01:42.479136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.799 ms 00:29:17.684 [2024-11-20 13:01:42.479142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.684 [2024-11-20 13:01:42.509695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.684 [2024-11-20 13:01:42.509727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:17.684 [2024-11-20 13:01:42.509749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.531 ms 00:29:17.684 [2024-11-20 13:01:42.509757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.684 [2024-11-20 13:01:42.509831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.684 [2024-11-20 13:01:42.509842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:17.684 [2024-11-20 13:01:42.509853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:29:17.684 [2024-11-20 13:01:42.509859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.684 [2024-11-20 13:01:42.527832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.684 [2024-11-20 13:01:42.527859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:17.684 [2024-11-20 13:01:42.527870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.946 ms 00:29:17.684 [2024-11-20 13:01:42.527877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.684 [2024-11-20 13:01:42.545691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.684 [2024-11-20 13:01:42.545716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:17.684 [2024-11-20 13:01:42.545726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.769 ms 00:29:17.684 [2024-11-20 13:01:42.545732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.684 [2024-11-20 13:01:42.546185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.684 [2024-11-20 13:01:42.546194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:17.684 [2024-11-20 13:01:42.546203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.415 ms 00:29:17.684 [2024-11-20 13:01:42.546209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.684 [2024-11-20 13:01:42.609499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.684 [2024-11-20 13:01:42.609526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:17.684 [2024-11-20 13:01:42.609539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 63.263 ms 00:29:17.684 [2024-11-20 13:01:42.609547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.684 [2024-11-20 13:01:42.629539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:17.684 [2024-11-20 13:01:42.629566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:17.684 [2024-11-20 13:01:42.629581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.938 ms 00:29:17.684 [2024-11-20 13:01:42.629588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.684 [2024-11-20 13:01:42.647598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.684 [2024-11-20 13:01:42.647723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:17.684 [2024-11-20 13:01:42.647750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.978 ms 00:29:17.684 [2024-11-20 13:01:42.647757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.684 [2024-11-20 13:01:42.666472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.684 [2024-11-20 13:01:42.666569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:17.684 [2024-11-20 13:01:42.666585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.686 ms 00:29:17.684 [2024-11-20 13:01:42.666591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.684 [2024-11-20 13:01:42.666622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.684 [2024-11-20 13:01:42.666629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:17.684 [2024-11-20 13:01:42.666640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:17.684 [2024-11-20 13:01:42.666646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.684 [2024-11-20 13:01:42.666711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.684 [2024-11-20 13:01:42.666719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:17.684 [2024-11-20 13:01:42.666729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:29:17.684 [2024-11-20 13:01:42.666735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.684 [2024-11-20 13:01:42.667561] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3950.461 ms, result 0 00:29:17.684 { 00:29:17.684 "name": "ftl", 00:29:17.684 "uuid": "0f9352ec-5ea0-423d-a893-6fc813d38d02" 00:29:17.684 } 00:29:17.684 13:01:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:17.684 [2024-11-20 13:01:42.874958] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.684 13:01:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:17.684 13:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:17.943 [2024-11-20 13:01:43.251237] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:17.943 13:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:17.943 [2024-11-20 13:01:43.443541] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:18.201 13:01:43 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:18.459 Fill FTL, iteration 1 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=82706 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 82706 /var/tmp/spdk.tgt.sock 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82706 ']' 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:18.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:18.459 13:01:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:18.459 [2024-11-20 13:01:43.850302] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
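The target-side NVMe/TCP export traced just above reduces to four RPCs (all verbatim from the trace) plus a final save_config:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" nvmf_create_transport --trtype TCP
  "$rpc" nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  "$rpc" nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  "$rpc" nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
  # save_config output is presumably captured into tgt.json for later restarts;
  # the redirection target is not shown in the trace.
  "$rpc" save_config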
00:29:18.459 [2024-11-20 13:01:43.850568] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82706 ] 00:29:18.717 [2024-11-20 13:01:44.010244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.717 [2024-11-20 13:01:44.106555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.284 13:01:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:19.284 13:01:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:19.284 13:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:19.542 ftln1 00:29:19.542 13:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:19.542 13:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:19.801 13:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:19.801 13:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 82706 00:29:19.801 13:01:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82706 ']' 00:29:19.801 13:01:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82706 00:29:19.801 13:01:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:19.801 13:01:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:19.801 13:01:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82706 00:29:19.801 killing process with pid 82706 00:29:19.801 13:01:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:19.801 13:01:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:19.801 13:01:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82706' 00:29:19.801 13:01:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 82706 00:29:19.801 13:01:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82706 00:29:21.176 13:01:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:21.176 13:01:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:21.176 [2024-11-20 13:01:46.652989] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
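The initiator instance above exists only long enough to generate a bdev config: attach the remote FTL namespace (which shows up as ftln1), dump the bdev subsystem, and wrap it in a subsystems envelope. A sketch of that sequence (the echoes and the RPCs are verbatim from the trace; the redirection into ini.json is an assumption):

  ini_rpc="$rpc -s /var/tmp/spdk.tgt.sock"
  $ini_rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2018-09.io.spdk:cnode0        # exposes the remote namespace as ftln1
  {
      echo '{"subsystems": ['
      $ini_rpc save_subsystem_config -n bdev
      echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json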
00:29:21.176 [2024-11-20 13:01:46.653102] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82748 ] 00:29:21.434 [2024-11-20 13:01:46.809062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.434 [2024-11-20 13:01:46.882118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.818  [2024-11-20T13:01:49.278Z] Copying: 256/1024 [MB] (256 MBps) [2024-11-20T13:01:50.221Z] Copying: 519/1024 [MB] (263 MBps) [2024-11-20T13:01:51.173Z] Copying: 775/1024 [MB] (256 MBps) [2024-11-20T13:01:51.745Z] Copying: 1024/1024 [MB] (average 260 MBps) 00:29:26.226 00:29:26.226 Calculate MD5 checksum, iteration 1 00:29:26.226 13:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:29:26.227 13:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:29:26.227 13:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:26.227 13:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:26.227 13:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:26.227 13:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:26.227 13:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:26.227 13:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:26.486 [2024-11-20 13:01:51.740830] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
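Each fill/verify round is a pair of spdk_dd runs driven by the same ini.json: one writes a 1 GiB stripe of urandom into ftln1 over NVMe/TCP, the next reads the same stripe back to a file for checksumming. Spelled out for iteration 1 (all flags verbatim from the trace):

  dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  ini=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
  file=/home/vagrant/spdk_repo/spdk/test/ftl/file
  # write 1024 x 1 MiB at queue depth 2, starting at offset 0
  "$dd" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json="$ini" \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
  # read the same 1 GiB back into a file
  "$dd" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json="$ini" \
      --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=0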
00:29:26.486 [2024-11-20 13:01:51.740917] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82805 ] 00:29:26.486 [2024-11-20 13:01:51.889067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.486 [2024-11-20 13:01:51.963164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.867  [2024-11-20T13:01:53.957Z] Copying: 668/1024 [MB] (668 MBps) [2024-11-20T13:01:54.524Z] Copying: 1024/1024 [MB] (average 679 MBps) 00:29:29.005 00:29:29.005 13:01:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:29:29.005 13:01:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:30.908 13:01:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:30.908 13:01:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=442d189700081eb67e8e024710359734 00:29:30.908 13:01:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:30.908 13:01:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:30.908 13:01:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:29:30.908 Fill FTL, iteration 2 00:29:30.908 13:01:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:30.908 13:01:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:30.908 13:01:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:30.908 13:01:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:30.908 13:01:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:30.908 13:01:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:30.908 [2024-11-20 13:01:56.396494] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
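The checksum bookkeeping between rounds is one line of shell, with seek/skip advanced by 1024 one-MiB blocks so that iteration 2 exercises the second GiB of the device (the md5sum/cut pipeline is verbatim from the trace; the exact increment expressions are an assumption, only their results appear in the log):

  sums[i]=$(md5sum "$file" | cut -f1 -d' ')   # iteration 1: 442d189700081eb67e8e024710359734
  seek=$(( seek + count ))                    # 0 -> 1024 -> 2048
  skip=$(( skip + count ))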
00:29:30.908 [2024-11-20 13:01:56.396814] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82857 ] 00:29:31.167 [2024-11-20 13:01:56.556235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.167 [2024-11-20 13:01:56.631829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.549  [2024-11-20T13:01:59.006Z] Copying: 255/1024 [MB] (255 MBps) [2024-11-20T13:01:59.942Z] Copying: 523/1024 [MB] (268 MBps) [2024-11-20T13:02:01.325Z] Copying: 763/1024 [MB] (240 MBps) [2024-11-20T13:02:01.584Z] Copying: 1024/1024 [MB] (average 256 MBps) 00:29:36.065 00:29:36.065 13:02:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:29:36.065 13:02:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:29:36.066 Calculate MD5 checksum, iteration 2 00:29:36.066 13:02:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:36.066 13:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:36.066 13:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:36.066 13:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:36.066 13:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:36.066 13:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:36.066 [2024-11-20 13:02:01.542625] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
00:29:36.066 [2024-11-20 13:02:01.542714] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82915 ] 00:29:36.324 [2024-11-20 13:02:01.690236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.324 [2024-11-20 13:02:01.763916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.699  [2024-11-20T13:02:03.789Z] Copying: 786/1024 [MB] (786 MBps) [2024-11-20T13:02:04.728Z] Copying: 1024/1024 [MB] (average 753 MBps) 00:29:39.209 00:29:39.209 13:02:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:29:39.209 13:02:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:41.111 13:02:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:41.111 13:02:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=d61378887a81b3f6384f037971e5b227 00:29:41.111 13:02:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:41.111 13:02:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:41.111 13:02:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:41.111 [2024-11-20 13:02:06.556786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.111 [2024-11-20 13:02:06.556950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:41.111 [2024-11-20 13:02:06.556968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:41.111 [2024-11-20 13:02:06.556976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.111 [2024-11-20 13:02:06.557001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.111 [2024-11-20 13:02:06.557009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:41.111 [2024-11-20 13:02:06.557016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:41.111 [2024-11-20 13:02:06.557027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.111 [2024-11-20 13:02:06.557044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.111 [2024-11-20 13:02:06.557051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:41.111 [2024-11-20 13:02:06.557057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:41.111 [2024-11-20 13:02:06.557064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.111 [2024-11-20 13:02:06.557119] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.322 ms, result 0 00:29:41.111 true 00:29:41.111 13:02:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:41.369 { 00:29:41.369 "name": "ftl", 00:29:41.369 "properties": [ 00:29:41.369 { 00:29:41.369 "name": "superblock_version", 00:29:41.369 "value": 5, 00:29:41.369 "read-only": true 00:29:41.369 }, 00:29:41.369 { 00:29:41.369 "name": "base_device", 00:29:41.369 "bands": [ 00:29:41.369 { 00:29:41.369 "id": 0, 00:29:41.369 "state": "FREE", 00:29:41.369 "validity": 0.0 
00:29:41.369 }, 00:29:41.369 { 00:29:41.369 "id": 1, 00:29:41.369 "state": "FREE", 00:29:41.369 "validity": 0.0 00:29:41.369 }, 00:29:41.369 { 00:29:41.369 "id": 2, 00:29:41.369 "state": "FREE", 00:29:41.369 "validity": 0.0 00:29:41.369 }, 00:29:41.369 { 00:29:41.369 "id": 3, 00:29:41.369 "state": "FREE", 00:29:41.369 "validity": 0.0 00:29:41.369 }, 00:29:41.369 { 00:29:41.369 "id": 4, 00:29:41.369 "state": "FREE", 00:29:41.369 "validity": 0.0 00:29:41.369 }, 00:29:41.369 { 00:29:41.369 "id": 5, 00:29:41.369 "state": "FREE", 00:29:41.369 "validity": 0.0 00:29:41.369 }, 00:29:41.369 { 00:29:41.369 "id": 6, 00:29:41.369 "state": "FREE", 00:29:41.369 "validity": 0.0 00:29:41.369 }, 00:29:41.369 { 00:29:41.369 "id": 7, 00:29:41.369 "state": "FREE", 00:29:41.369 "validity": 0.0 00:29:41.369 }, 00:29:41.369 { 00:29:41.369 "id": 8, 00:29:41.369 "state": "FREE", 00:29:41.369 "validity": 0.0 00:29:41.369 }, 00:29:41.369 { 00:29:41.369 "id": 9, 00:29:41.370 "state": "FREE", 00:29:41.370 "validity": 0.0 00:29:41.370 }, 00:29:41.370 { 00:29:41.370 "id": 10, 00:29:41.370 "state": "FREE", 00:29:41.370 "validity": 0.0 00:29:41.370 }, 00:29:41.370 { 00:29:41.370 "id": 11, 00:29:41.370 "state": "FREE", 00:29:41.370 "validity": 0.0 00:29:41.370 }, 00:29:41.370 { 00:29:41.370 "id": 12, 00:29:41.370 "state": "FREE", 00:29:41.370 "validity": 0.0 00:29:41.370 }, 00:29:41.370 { 00:29:41.370 "id": 13, 00:29:41.370 "state": "FREE", 00:29:41.370 "validity": 0.0 00:29:41.370 }, 00:29:41.370 { 00:29:41.370 "id": 14, 00:29:41.370 "state": "FREE", 00:29:41.370 "validity": 0.0 00:29:41.370 }, 00:29:41.370 { 00:29:41.370 "id": 15, 00:29:41.370 "state": "FREE", 00:29:41.370 "validity": 0.0 00:29:41.370 }, 00:29:41.370 { 00:29:41.370 "id": 16, 00:29:41.370 "state": "FREE", 00:29:41.370 "validity": 0.0 00:29:41.370 }, 00:29:41.370 { 00:29:41.370 "id": 17, 00:29:41.370 "state": "FREE", 00:29:41.370 "validity": 0.0 00:29:41.370 } 00:29:41.370 ], 00:29:41.370 "read-only": true 00:29:41.370 }, 00:29:41.370 { 00:29:41.370 "name": "cache_device", 00:29:41.370 "type": "bdev", 00:29:41.370 "chunks": [ 00:29:41.370 { 00:29:41.370 "id": 0, 00:29:41.370 "state": "INACTIVE", 00:29:41.370 "utilization": 0.0 00:29:41.370 }, 00:29:41.370 { 00:29:41.370 "id": 1, 00:29:41.370 "state": "CLOSED", 00:29:41.370 "utilization": 1.0 00:29:41.370 }, 00:29:41.370 { 00:29:41.370 "id": 2, 00:29:41.370 "state": "CLOSED", 00:29:41.370 "utilization": 1.0 00:29:41.370 }, 00:29:41.370 { 00:29:41.370 "id": 3, 00:29:41.370 "state": "OPEN", 00:29:41.370 "utilization": 0.001953125 00:29:41.370 }, 00:29:41.370 { 00:29:41.370 "id": 4, 00:29:41.370 "state": "OPEN", 00:29:41.370 "utilization": 0.0 00:29:41.370 } 00:29:41.370 ], 00:29:41.370 "read-only": true 00:29:41.370 }, 00:29:41.370 { 00:29:41.370 "name": "verbose_mode", 00:29:41.370 "value": true, 00:29:41.370 "unit": "", 00:29:41.370 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:41.370 }, 00:29:41.370 { 00:29:41.370 "name": "prep_upgrade_on_shutdown", 00:29:41.370 "value": false, 00:29:41.370 "unit": "", 00:29:41.370 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:41.370 } 00:29:41.370 ] 00:29:41.370 } 00:29:41.370 13:02:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:29:41.628 [2024-11-20 13:02:06.925036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
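[Editor's note] Interleaving aside, the property round-trip running through this stretch of the log reduces to a pair of RPCs plus the jq count applied at @63 below; a hedged sketch from the logged commands (what the script does when the count is zero is not visible in this excerpt):

    rpc="$SPDK/scripts/rpc.py"
    # verbose_mode must be on for the advanced FTL properties to be exposed.
    $rpc bdev_ftl_set_property -b ftl -p verbose_mode -v true
    $rpc bdev_ftl_get_properties -b ftl
    # Arm the shutdown-time upgrade path, then confirm live data is still
    # resident in the NV cache before the target is shut down.
    $rpc bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
    used=$($rpc bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    # In this run used=3: chunks 1 and 2 are CLOSED at utilization 1.0 and
    # chunk 3 is OPEN at 0.001953125.
    if [[ $used -eq 0 ]]; then
        :   # zero-used branch not exercised (or shown) in this excerpt
    fi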
00:29:41.628 [2024-11-20 13:02:06.925155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:41.628 [2024-11-20 13:02:06.925200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:41.628 [2024-11-20 13:02:06.925219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.628 [2024-11-20 13:02:06.925249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.628 [2024-11-20 13:02:06.925266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:41.628 [2024-11-20 13:02:06.925282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:41.628 [2024-11-20 13:02:06.925296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.628 [2024-11-20 13:02:06.925320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.628 [2024-11-20 13:02:06.925337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:41.628 [2024-11-20 13:02:06.925353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:41.628 [2024-11-20 13:02:06.925399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.628 [2024-11-20 13:02:06.925460] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.405 ms, result 0 00:29:41.628 true 00:29:41.628 13:02:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:29:41.628 13:02:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:41.628 13:02:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:41.628 13:02:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:29:41.628 13:02:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:29:41.628 13:02:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:41.886 [2024-11-20 13:02:07.281349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.886 [2024-11-20 13:02:07.281445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:41.886 [2024-11-20 13:02:07.281457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:41.886 [2024-11-20 13:02:07.281464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.886 [2024-11-20 13:02:07.281482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.886 [2024-11-20 13:02:07.281488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:41.886 [2024-11-20 13:02:07.281495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:41.886 [2024-11-20 13:02:07.281500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.886 [2024-11-20 13:02:07.281514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.886 [2024-11-20 13:02:07.281521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:41.886 [2024-11-20 13:02:07.281526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:41.886 [2024-11-20 13:02:07.281531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:41.886 [2024-11-20 13:02:07.281573] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.211 ms, result 0 00:29:41.886 true 00:29:41.886 13:02:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:42.144 { 00:29:42.144 "name": "ftl", 00:29:42.144 "properties": [ 00:29:42.144 { 00:29:42.144 "name": "superblock_version", 00:29:42.144 "value": 5, 00:29:42.144 "read-only": true 00:29:42.144 }, 00:29:42.144 { 00:29:42.144 "name": "base_device", 00:29:42.144 "bands": [ 00:29:42.144 { 00:29:42.144 "id": 0, 00:29:42.144 "state": "FREE", 00:29:42.144 "validity": 0.0 00:29:42.144 }, 00:29:42.144 { 00:29:42.144 "id": 1, 00:29:42.144 "state": "FREE", 00:29:42.144 "validity": 0.0 00:29:42.144 }, 00:29:42.144 { 00:29:42.144 "id": 2, 00:29:42.144 "state": "FREE", 00:29:42.144 "validity": 0.0 00:29:42.144 }, 00:29:42.144 { 00:29:42.144 "id": 3, 00:29:42.144 "state": "FREE", 00:29:42.144 "validity": 0.0 00:29:42.144 }, 00:29:42.144 { 00:29:42.145 "id": 4, 00:29:42.145 "state": "FREE", 00:29:42.145 "validity": 0.0 00:29:42.145 }, 00:29:42.145 { 00:29:42.145 "id": 5, 00:29:42.145 "state": "FREE", 00:29:42.145 "validity": 0.0 00:29:42.145 }, 00:29:42.145 { 00:29:42.145 "id": 6, 00:29:42.145 "state": "FREE", 00:29:42.145 "validity": 0.0 00:29:42.145 }, 00:29:42.145 { 00:29:42.145 "id": 7, 00:29:42.145 "state": "FREE", 00:29:42.145 "validity": 0.0 00:29:42.145 }, 00:29:42.145 { 00:29:42.145 "id": 8, 00:29:42.145 "state": "FREE", 00:29:42.145 "validity": 0.0 00:29:42.145 }, 00:29:42.145 { 00:29:42.145 "id": 9, 00:29:42.145 "state": "FREE", 00:29:42.145 "validity": 0.0 00:29:42.145 }, 00:29:42.145 { 00:29:42.145 "id": 10, 00:29:42.145 "state": "FREE", 00:29:42.145 "validity": 0.0 00:29:42.145 }, 00:29:42.145 { 00:29:42.145 "id": 11, 00:29:42.145 "state": "FREE", 00:29:42.145 "validity": 0.0 00:29:42.145 }, 00:29:42.145 { 00:29:42.145 "id": 12, 00:29:42.145 "state": "FREE", 00:29:42.145 "validity": 0.0 00:29:42.145 }, 00:29:42.145 { 00:29:42.145 "id": 13, 00:29:42.145 "state": "FREE", 00:29:42.145 "validity": 0.0 00:29:42.145 }, 00:29:42.145 { 00:29:42.145 "id": 14, 00:29:42.145 "state": "FREE", 00:29:42.145 "validity": 0.0 00:29:42.145 }, 00:29:42.145 { 00:29:42.145 "id": 15, 00:29:42.145 "state": "FREE", 00:29:42.145 "validity": 0.0 00:29:42.145 }, 00:29:42.145 { 00:29:42.145 "id": 16, 00:29:42.145 "state": "FREE", 00:29:42.145 "validity": 0.0 00:29:42.145 }, 00:29:42.145 { 00:29:42.145 "id": 17, 00:29:42.145 "state": "FREE", 00:29:42.145 "validity": 0.0 00:29:42.145 } 00:29:42.145 ], 00:29:42.145 "read-only": true 00:29:42.145 }, 00:29:42.145 { 00:29:42.145 "name": "cache_device", 00:29:42.145 "type": "bdev", 00:29:42.145 "chunks": [ 00:29:42.145 { 00:29:42.145 "id": 0, 00:29:42.145 "state": "INACTIVE", 00:29:42.145 "utilization": 0.0 00:29:42.145 }, 00:29:42.145 { 00:29:42.145 "id": 1, 00:29:42.145 "state": "CLOSED", 00:29:42.145 "utilization": 1.0 00:29:42.145 }, 00:29:42.145 { 00:29:42.145 "id": 2, 00:29:42.145 "state": "CLOSED", 00:29:42.145 "utilization": 1.0 00:29:42.145 }, 00:29:42.145 { 00:29:42.145 "id": 3, 00:29:42.145 "state": "OPEN", 00:29:42.145 "utilization": 0.001953125 00:29:42.145 }, 00:29:42.145 { 00:29:42.145 "id": 4, 00:29:42.145 "state": "OPEN", 00:29:42.145 "utilization": 0.0 00:29:42.145 } 00:29:42.145 ], 00:29:42.145 "read-only": true 00:29:42.145 }, 00:29:42.145 { 00:29:42.145 "name": "verbose_mode", 
00:29:42.145 "value": true, 00:29:42.145 "unit": "", 00:29:42.145 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:42.145 }, 00:29:42.145 { 00:29:42.145 "name": "prep_upgrade_on_shutdown", 00:29:42.145 "value": true, 00:29:42.145 "unit": "", 00:29:42.145 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:42.145 } 00:29:42.145 ] 00:29:42.145 } 00:29:42.145 13:02:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:29:42.145 13:02:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82581 ]] 00:29:42.145 13:02:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82581 00:29:42.145 13:02:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82581 ']' 00:29:42.145 13:02:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82581 00:29:42.145 13:02:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:42.145 13:02:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:42.145 13:02:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82581 00:29:42.145 killing process with pid 82581 00:29:42.145 13:02:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:42.145 13:02:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:42.145 13:02:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82581' 00:29:42.145 13:02:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 82581 00:29:42.145 13:02:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82581 00:29:42.714 [2024-11-20 13:02:08.054725] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:42.714 [2024-11-20 13:02:08.065079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.714 [2024-11-20 13:02:08.065114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:42.714 [2024-11-20 13:02:08.065126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:42.714 [2024-11-20 13:02:08.065133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.714 [2024-11-20 13:02:08.065152] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:42.714 [2024-11-20 13:02:08.067292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.714 [2024-11-20 13:02:08.067316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:42.714 [2024-11-20 13:02:08.067325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.128 ms 00:29:42.714 [2024-11-20 13:02:08.067332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.843 [2024-11-20 13:02:15.858302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.843 [2024-11-20 13:02:15.858359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:50.843 [2024-11-20 13:02:15.858372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7790.920 ms 00:29:50.843 [2024-11-20 13:02:15.858380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.843 [2024-11-20 13:02:15.859439] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:29:50.843 [2024-11-20 13:02:15.859459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:50.843 [2024-11-20 13:02:15.859467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.043 ms 00:29:50.843 [2024-11-20 13:02:15.859474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.844 [2024-11-20 13:02:15.860367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.844 [2024-11-20 13:02:15.860379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:50.844 [2024-11-20 13:02:15.860387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.857 ms 00:29:50.844 [2024-11-20 13:02:15.860394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.844 [2024-11-20 13:02:15.868474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.844 [2024-11-20 13:02:15.868575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:50.844 [2024-11-20 13:02:15.868624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.045 ms 00:29:50.844 [2024-11-20 13:02:15.868642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.844 [2024-11-20 13:02:15.874041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.844 [2024-11-20 13:02:15.874139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:50.844 [2024-11-20 13:02:15.874186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.367 ms 00:29:50.844 [2024-11-20 13:02:15.874205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.844 [2024-11-20 13:02:15.874276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.844 [2024-11-20 13:02:15.874296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:50.844 [2024-11-20 13:02:15.874312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:29:50.844 [2024-11-20 13:02:15.874331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.844 [2024-11-20 13:02:15.881405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.844 [2024-11-20 13:02:15.881495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:50.844 [2024-11-20 13:02:15.881539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.054 ms 00:29:50.844 [2024-11-20 13:02:15.881555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.844 [2024-11-20 13:02:15.889094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.844 [2024-11-20 13:02:15.889183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:50.844 [2024-11-20 13:02:15.889227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.508 ms 00:29:50.844 [2024-11-20 13:02:15.889244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.844 [2024-11-20 13:02:15.896419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.844 [2024-11-20 13:02:15.896506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:50.844 [2024-11-20 13:02:15.896647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.146 ms 00:29:50.844 [2024-11-20 13:02:15.896665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.844 [2024-11-20 13:02:15.904315] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.844 [2024-11-20 13:02:15.904395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:50.844 [2024-11-20 13:02:15.904431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.595 ms 00:29:50.844 [2024-11-20 13:02:15.904448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.844 [2024-11-20 13:02:15.904477] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:50.844 [2024-11-20 13:02:15.904498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:50.844 [2024-11-20 13:02:15.904522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:50.844 [2024-11-20 13:02:15.904553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:50.844 [2024-11-20 13:02:15.904576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:50.844 [2024-11-20 13:02:15.904628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:50.844 [2024-11-20 13:02:15.904731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:50.844 [2024-11-20 13:02:15.904775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:50.844 [2024-11-20 13:02:15.904816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:50.844 [2024-11-20 13:02:15.904839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:50.844 [2024-11-20 13:02:15.904864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:50.844 [2024-11-20 13:02:15.904890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:50.844 [2024-11-20 13:02:15.904912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:50.844 [2024-11-20 13:02:15.904933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:50.844 [2024-11-20 13:02:15.904954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:50.844 [2024-11-20 13:02:15.905002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:50.844 [2024-11-20 13:02:15.905074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:50.844 [2024-11-20 13:02:15.905083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:50.844 [2024-11-20 13:02:15.905089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:50.844 [2024-11-20 13:02:15.905098] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:50.844 [2024-11-20 13:02:15.905105] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 0f9352ec-5ea0-423d-a893-6fc813d38d02 00:29:50.844 [2024-11-20 13:02:15.905112] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:50.844 [2024-11-20 13:02:15.905118] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:29:50.844 [2024-11-20 13:02:15.905124] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:50.844 [2024-11-20 13:02:15.905131] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:50.844 [2024-11-20 13:02:15.905136] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:50.844 [2024-11-20 13:02:15.905143] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:50.844 [2024-11-20 13:02:15.905152] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:50.844 [2024-11-20 13:02:15.905157] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:50.844 [2024-11-20 13:02:15.905163] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:50.844 [2024-11-20 13:02:15.905170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.844 [2024-11-20 13:02:15.905176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:50.844 [2024-11-20 13:02:15.905186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.694 ms 00:29:50.844 [2024-11-20 13:02:15.905193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.844 [2024-11-20 13:02:15.915703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.844 [2024-11-20 13:02:15.915728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:50.844 [2024-11-20 13:02:15.915736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.486 ms 00:29:50.844 [2024-11-20 13:02:15.915752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.844 [2024-11-20 13:02:15.916056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.844 [2024-11-20 13:02:15.916065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:50.844 [2024-11-20 13:02:15.916072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.285 ms 00:29:50.844 [2024-11-20 13:02:15.916078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.844 [2024-11-20 13:02:15.951060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.844 [2024-11-20 13:02:15.951089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:50.844 [2024-11-20 13:02:15.951098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.844 [2024-11-20 13:02:15.951109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.844 [2024-11-20 13:02:15.951133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.844 [2024-11-20 13:02:15.951140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:50.844 [2024-11-20 13:02:15.951147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.844 [2024-11-20 13:02:15.951153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.844 [2024-11-20 13:02:15.951204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.844 [2024-11-20 13:02:15.951218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:50.844 [2024-11-20 13:02:15.951224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.844 [2024-11-20 13:02:15.951230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.844 [2024-11-20 13:02:15.951246] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.844 [2024-11-20 13:02:15.951253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:50.844 [2024-11-20 13:02:15.951259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.844 [2024-11-20 13:02:15.951266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.844 [2024-11-20 13:02:16.014259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.844 [2024-11-20 13:02:16.014419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:50.844 [2024-11-20 13:02:16.014433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.844 [2024-11-20 13:02:16.014441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.844 [2024-11-20 13:02:16.066091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.844 [2024-11-20 13:02:16.066128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:50.844 [2024-11-20 13:02:16.066137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.844 [2024-11-20 13:02:16.066144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.844 [2024-11-20 13:02:16.066214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.844 [2024-11-20 13:02:16.066222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:50.844 [2024-11-20 13:02:16.066229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.844 [2024-11-20 13:02:16.066236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.844 [2024-11-20 13:02:16.066289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.844 [2024-11-20 13:02:16.066303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:50.845 [2024-11-20 13:02:16.066310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.845 [2024-11-20 13:02:16.066316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.845 [2024-11-20 13:02:16.066390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.845 [2024-11-20 13:02:16.066399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:50.845 [2024-11-20 13:02:16.066405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.845 [2024-11-20 13:02:16.066412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.845 [2024-11-20 13:02:16.066438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.845 [2024-11-20 13:02:16.066446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:50.845 [2024-11-20 13:02:16.066455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.845 [2024-11-20 13:02:16.066462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.845 [2024-11-20 13:02:16.066499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.845 [2024-11-20 13:02:16.066506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:50.845 [2024-11-20 13:02:16.066514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.845 [2024-11-20 13:02:16.066521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.845 
[2024-11-20 13:02:16.066564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.845 [2024-11-20 13:02:16.066575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:50.845 [2024-11-20 13:02:16.066583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.845 [2024-11-20 13:02:16.066589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.845 [2024-11-20 13:02:16.066700] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8001.569 ms, result 0 00:29:55.124 13:02:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:55.124 13:02:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:29:55.124 13:02:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:55.124 13:02:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:55.124 13:02:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:55.124 13:02:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83094 00:29:55.124 13:02:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:55.124 13:02:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83094 00:29:55.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:55.124 13:02:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83094 ']' 00:29:55.124 13:02:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:55.124 13:02:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:55.124 13:02:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:55.124 13:02:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:55.124 13:02:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:55.124 13:02:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:55.124 [2024-11-20 13:02:20.371851] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
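[Editor's note] The restart side, tcp_target_setup (ftl/common.sh @81-@91), starts a fresh spdk_tgt from the JSON config the old target persisted at shutdown; a sketch of the traced shape, with the backgrounding and the missing-config branch as assumptions:

    tcp_target_setup() {
        local base_bdev= cache_bdev=
        # tgt.json was saved by the pre-shutdown target, so the new process
        # restores the whole FTL bdev stack from config instead of via RPCs.
        [[ -f $SPDK/test/ftl/config/tgt.json ]] || return 1   # guard assumed
        "$SPDK/build/bin/spdk_tgt" '--cpumask=[0]' \
            --config="$SPDK/test/ftl/config/tgt.json" &
        spdk_tgt_pid=$!
        export spdk_tgt_pid
        # waitforlisten (autotest_common.sh) blocks until the target's RPC
        # socket at /var/tmp/spdk.sock answers; polling details omitted.
        waitforlisten "$spdk_tgt_pid"
    }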
00:29:55.124 [2024-11-20 13:02:20.372148] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83094 ] 00:29:55.124 [2024-11-20 13:02:20.527323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.124 [2024-11-20 13:02:20.613021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.060 [2024-11-20 13:02:21.240078] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:56.060 [2024-11-20 13:02:21.240294] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:56.060 [2024-11-20 13:02:21.388863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.060 [2024-11-20 13:02:21.388897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:56.060 [2024-11-20 13:02:21.388909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:56.060 [2024-11-20 13:02:21.388916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.060 [2024-11-20 13:02:21.388956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.060 [2024-11-20 13:02:21.388964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:56.060 [2024-11-20 13:02:21.388970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:29:56.060 [2024-11-20 13:02:21.388976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.060 [2024-11-20 13:02:21.388994] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:56.060 [2024-11-20 13:02:21.389550] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:56.060 [2024-11-20 13:02:21.389564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.060 [2024-11-20 13:02:21.389570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:56.060 [2024-11-20 13:02:21.389577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.576 ms 00:29:56.060 [2024-11-20 13:02:21.389583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.060 [2024-11-20 13:02:21.390893] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:56.060 [2024-11-20 13:02:21.401622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.060 [2024-11-20 13:02:21.401652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:56.060 [2024-11-20 13:02:21.401667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.730 ms 00:29:56.060 [2024-11-20 13:02:21.401675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.060 [2024-11-20 13:02:21.401723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.060 [2024-11-20 13:02:21.401731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:56.060 [2024-11-20 13:02:21.401751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:56.060 [2024-11-20 13:02:21.401757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.060 [2024-11-20 13:02:21.408101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.060 [2024-11-20 
13:02:21.408131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:56.060 [2024-11-20 13:02:21.408138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.298 ms 00:29:56.060 [2024-11-20 13:02:21.408144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.060 [2024-11-20 13:02:21.408190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.060 [2024-11-20 13:02:21.408197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:56.060 [2024-11-20 13:02:21.408204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:29:56.060 [2024-11-20 13:02:21.408209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.060 [2024-11-20 13:02:21.408255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.060 [2024-11-20 13:02:21.408263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:56.060 [2024-11-20 13:02:21.408273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:56.060 [2024-11-20 13:02:21.408279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.060 [2024-11-20 13:02:21.408296] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:56.060 [2024-11-20 13:02:21.411243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.060 [2024-11-20 13:02:21.411265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:56.060 [2024-11-20 13:02:21.411273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.950 ms 00:29:56.060 [2024-11-20 13:02:21.411281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.060 [2024-11-20 13:02:21.411305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.060 [2024-11-20 13:02:21.411311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:56.060 [2024-11-20 13:02:21.411318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:56.060 [2024-11-20 13:02:21.411324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.060 [2024-11-20 13:02:21.411340] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:56.060 [2024-11-20 13:02:21.411357] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:56.060 [2024-11-20 13:02:21.411387] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:56.060 [2024-11-20 13:02:21.411399] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:56.060 [2024-11-20 13:02:21.411480] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:56.060 [2024-11-20 13:02:21.411489] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:56.060 [2024-11-20 13:02:21.411497] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:56.060 [2024-11-20 13:02:21.411504] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:56.060 [2024-11-20 13:02:21.411511] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:29:56.060 [2024-11-20 13:02:21.411520] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:56.060 [2024-11-20 13:02:21.411527] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:56.060 [2024-11-20 13:02:21.411532] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:56.060 [2024-11-20 13:02:21.411538] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:56.061 [2024-11-20 13:02:21.411545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.061 [2024-11-20 13:02:21.411550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:56.061 [2024-11-20 13:02:21.411556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.207 ms 00:29:56.061 [2024-11-20 13:02:21.411561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.061 [2024-11-20 13:02:21.411626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.061 [2024-11-20 13:02:21.411633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:56.061 [2024-11-20 13:02:21.411639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:29:56.061 [2024-11-20 13:02:21.411646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.061 [2024-11-20 13:02:21.411722] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:56.061 [2024-11-20 13:02:21.411730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:56.061 [2024-11-20 13:02:21.411749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:56.061 [2024-11-20 13:02:21.411756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.061 [2024-11-20 13:02:21.411762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:56.061 [2024-11-20 13:02:21.411767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:56.061 [2024-11-20 13:02:21.411773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:56.061 [2024-11-20 13:02:21.411779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:56.061 [2024-11-20 13:02:21.411785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:56.061 [2024-11-20 13:02:21.411791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.061 [2024-11-20 13:02:21.411797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:56.061 [2024-11-20 13:02:21.411808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:56.061 [2024-11-20 13:02:21.411813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.061 [2024-11-20 13:02:21.411819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:56.061 [2024-11-20 13:02:21.411824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:56.061 [2024-11-20 13:02:21.411830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.061 [2024-11-20 13:02:21.411835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:56.061 [2024-11-20 13:02:21.411840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:56.061 [2024-11-20 13:02:21.411845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.061 [2024-11-20 13:02:21.411850] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:56.061 [2024-11-20 13:02:21.411855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:56.061 [2024-11-20 13:02:21.411860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:56.061 [2024-11-20 13:02:21.411866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:56.061 [2024-11-20 13:02:21.411872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:56.061 [2024-11-20 13:02:21.411877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:56.061 [2024-11-20 13:02:21.411888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:56.061 [2024-11-20 13:02:21.411903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:56.061 [2024-11-20 13:02:21.411908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:56.061 [2024-11-20 13:02:21.411913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:56.061 [2024-11-20 13:02:21.411918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:56.061 [2024-11-20 13:02:21.411923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:56.061 [2024-11-20 13:02:21.411929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:56.061 [2024-11-20 13:02:21.411934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:56.061 [2024-11-20 13:02:21.411939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.061 [2024-11-20 13:02:21.411944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:56.061 [2024-11-20 13:02:21.411949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:56.061 [2024-11-20 13:02:21.411954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.061 [2024-11-20 13:02:21.411960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:56.061 [2024-11-20 13:02:21.411965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:56.061 [2024-11-20 13:02:21.411970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.061 [2024-11-20 13:02:21.411975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:56.061 [2024-11-20 13:02:21.411980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:56.061 [2024-11-20 13:02:21.411985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.061 [2024-11-20 13:02:21.411992] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:56.061 [2024-11-20 13:02:21.412011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:56.061 [2024-11-20 13:02:21.412017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:56.061 [2024-11-20 13:02:21.412023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.061 [2024-11-20 13:02:21.412030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:56.061 [2024-11-20 13:02:21.412035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:56.061 [2024-11-20 13:02:21.412040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:56.061 [2024-11-20 13:02:21.412047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:56.061 [2024-11-20 13:02:21.412052] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:56.061 [2024-11-20 13:02:21.412057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:56.061 [2024-11-20 13:02:21.412064] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:56.061 [2024-11-20 13:02:21.412071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:56.061 [2024-11-20 13:02:21.412077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:56.061 [2024-11-20 13:02:21.412083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:56.061 [2024-11-20 13:02:21.412089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:56.061 [2024-11-20 13:02:21.412094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:56.061 [2024-11-20 13:02:21.412102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:56.061 [2024-11-20 13:02:21.412107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:56.061 [2024-11-20 13:02:21.412113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:56.061 [2024-11-20 13:02:21.412119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:56.061 [2024-11-20 13:02:21.412124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:56.061 [2024-11-20 13:02:21.412129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:56.061 [2024-11-20 13:02:21.412135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:56.061 [2024-11-20 13:02:21.412140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:56.061 [2024-11-20 13:02:21.412145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:56.061 [2024-11-20 13:02:21.412151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:56.061 [2024-11-20 13:02:21.412156] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:56.061 [2024-11-20 13:02:21.412164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:56.061 [2024-11-20 13:02:21.412170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:56.061 [2024-11-20 13:02:21.412176] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:56.061 [2024-11-20 13:02:21.412181] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:56.061 [2024-11-20 13:02:21.412186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:56.061 [2024-11-20 13:02:21.412195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.061 [2024-11-20 13:02:21.412201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:56.061 [2024-11-20 13:02:21.412208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.526 ms 00:29:56.061 [2024-11-20 13:02:21.412214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.061 [2024-11-20 13:02:21.412257] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:56.061 [2024-11-20 13:02:21.412266] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:00.270 [2024-11-20 13:02:25.309470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.270 [2024-11-20 13:02:25.309571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:00.270 [2024-11-20 13:02:25.309592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3897.193 ms 00:30:00.270 [2024-11-20 13:02:25.309605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.270 [2024-11-20 13:02:25.346529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.270 [2024-11-20 13:02:25.346601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:00.270 [2024-11-20 13:02:25.346618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.638 ms 00:30:00.270 [2024-11-20 13:02:25.346627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.270 [2024-11-20 13:02:25.346732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.270 [2024-11-20 13:02:25.346773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:00.270 [2024-11-20 13:02:25.346784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:30:00.270 [2024-11-20 13:02:25.346794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.270 [2024-11-20 13:02:25.387042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.270 [2024-11-20 13:02:25.387101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:00.270 [2024-11-20 13:02:25.387115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.206 ms 00:30:00.270 [2024-11-20 13:02:25.387129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.270 [2024-11-20 13:02:25.387180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.270 [2024-11-20 13:02:25.387190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:00.271 [2024-11-20 13:02:25.387200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:00.271 [2024-11-20 13:02:25.387209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.271 [2024-11-20 13:02:25.388015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.271 [2024-11-20 13:02:25.388055] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:00.271 [2024-11-20 13:02:25.388069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.736 ms 00:30:00.271 [2024-11-20 13:02:25.388080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.271 [2024-11-20 13:02:25.388150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.271 [2024-11-20 13:02:25.388162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:00.271 [2024-11-20 13:02:25.388174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:30:00.271 [2024-11-20 13:02:25.388185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.271 [2024-11-20 13:02:25.408980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.271 [2024-11-20 13:02:25.409029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:00.271 [2024-11-20 13:02:25.409042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.770 ms 00:30:00.271 [2024-11-20 13:02:25.409051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.271 [2024-11-20 13:02:25.424453] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:00.271 [2024-11-20 13:02:25.424508] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:00.271 [2024-11-20 13:02:25.424524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.271 [2024-11-20 13:02:25.424535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:00.271 [2024-11-20 13:02:25.424546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.343 ms 00:30:00.271 [2024-11-20 13:02:25.424554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.271 [2024-11-20 13:02:25.439583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.271 [2024-11-20 13:02:25.439636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:00.271 [2024-11-20 13:02:25.439649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.970 ms 00:30:00.271 [2024-11-20 13:02:25.439659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.271 [2024-11-20 13:02:25.452529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.271 [2024-11-20 13:02:25.452575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:00.271 [2024-11-20 13:02:25.452588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.808 ms 00:30:00.271 [2024-11-20 13:02:25.452597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.271 [2024-11-20 13:02:25.465331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.271 [2024-11-20 13:02:25.465378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:00.271 [2024-11-20 13:02:25.465390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.681 ms 00:30:00.271 [2024-11-20 13:02:25.465398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.271 [2024-11-20 13:02:25.466149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.271 [2024-11-20 13:02:25.466183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:00.271 [2024-11-20 
13:02:25.466195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.625 ms 00:30:00.271 [2024-11-20 13:02:25.466204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.271 [2024-11-20 13:02:25.551616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.271 [2024-11-20 13:02:25.551690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:00.271 [2024-11-20 13:02:25.551707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 85.386 ms 00:30:00.271 [2024-11-20 13:02:25.551716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.271 [2024-11-20 13:02:25.564870] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:00.271 [2024-11-20 13:02:25.566170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.271 [2024-11-20 13:02:25.566217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:00.271 [2024-11-20 13:02:25.566229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.370 ms 00:30:00.271 [2024-11-20 13:02:25.566239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.271 [2024-11-20 13:02:25.566337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.271 [2024-11-20 13:02:25.566353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:00.271 [2024-11-20 13:02:25.566364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:30:00.271 [2024-11-20 13:02:25.566373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.271 [2024-11-20 13:02:25.566440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.271 [2024-11-20 13:02:25.566453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:00.271 [2024-11-20 13:02:25.566464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:30:00.271 [2024-11-20 13:02:25.566473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.271 [2024-11-20 13:02:25.566498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.271 [2024-11-20 13:02:25.566508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:00.271 [2024-11-20 13:02:25.566517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:00.271 [2024-11-20 13:02:25.566529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.271 [2024-11-20 13:02:25.566573] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:00.271 [2024-11-20 13:02:25.566587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.271 [2024-11-20 13:02:25.566596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:00.271 [2024-11-20 13:02:25.566605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:00.271 [2024-11-20 13:02:25.566613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.271 [2024-11-20 13:02:25.592500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.271 [2024-11-20 13:02:25.592559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:00.271 [2024-11-20 13:02:25.592573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.864 ms 00:30:00.271 [2024-11-20 13:02:25.592583] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:00.271 [2024-11-20 13:02:25.592684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:00.271 [2024-11-20 13:02:25.592696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:30:00.271 [2024-11-20 13:02:25.592706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms
00:30:00.271 [2024-11-20 13:02:25.592714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:00.271 [2024-11-20 13:02:25.594228] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4204.724 ms, result 0
00:30:00.271 [2024-11-20 13:02:25.608980] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:00.271 [2024-11-20 13:02:25.624990] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:30:00.271 [2024-11-20 13:02:25.633216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:30:00.840 13:02:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:00.840 13:02:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:30:00.840 13:02:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:30:00.840 13:02:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:30:00.840 13:02:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:30:01.099 [2024-11-20 13:02:26.526769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:01.099 [2024-11-20 13:02:26.526808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:30:01.099 [2024-11-20 13:02:26.526819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms
00:30:01.099 [2024-11-20 13:02:26.526829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:01.099 [2024-11-20 13:02:26.526847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:01.099 [2024-11-20 13:02:26.526855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:30:01.099 [2024-11-20 13:02:26.526862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:30:01.099 [2024-11-20 13:02:26.526870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:01.099 [2024-11-20 13:02:26.526885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:01.099 [2024-11-20 13:02:26.526893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:30:01.099 [2024-11-20 13:02:26.526899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:30:01.099 [2024-11-20 13:02:26.526905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:01.099 [2024-11-20 13:02:26.526951] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.184 ms, result 0
00:30:01.099 true
00:30:01.099 13:02:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:30:01.356 {
00:30:01.356 "name": "ftl",
00:30:01.356 "properties": [
00:30:01.356 {
00:30:01.356 "name": "superblock_version",
00:30:01.356 "value": 5,
00:30:01.356 "read-only": true
00:30:01.356 },
00:30:01.356 {
00:30:01.356 "name": "base_device",
00:30:01.356 "bands": [
00:30:01.356 {
00:30:01.356 "id": 0,
00:30:01.356 "state": "CLOSED",
00:30:01.356 "validity": 1.0
00:30:01.356 },
00:30:01.356 {
00:30:01.356 "id": 1,
00:30:01.356 "state": "CLOSED",
00:30:01.356 "validity": 1.0
00:30:01.356 },
00:30:01.356 {
00:30:01.356 "id": 2,
00:30:01.356 "state": "CLOSED",
00:30:01.356 "validity": 0.007843137254901933
00:30:01.356 },
00:30:01.356 {
00:30:01.356 "id": 3,
00:30:01.356 "state": "FREE",
00:30:01.356 "validity": 0.0
00:30:01.356 },
00:30:01.356 {
00:30:01.356 "id": 4,
00:30:01.356 "state": "FREE",
00:30:01.356 "validity": 0.0
00:30:01.356 },
00:30:01.356 {
00:30:01.356 "id": 5,
00:30:01.356 "state": "FREE",
00:30:01.356 "validity": 0.0
00:30:01.356 },
00:30:01.356 {
00:30:01.356 "id": 6,
00:30:01.356 "state": "FREE",
00:30:01.356 "validity": 0.0
00:30:01.356 },
00:30:01.356 {
00:30:01.356 "id": 7,
00:30:01.356 "state": "FREE",
00:30:01.356 "validity": 0.0
00:30:01.356 },
00:30:01.356 {
00:30:01.356 "id": 8,
00:30:01.356 "state": "FREE",
00:30:01.356 "validity": 0.0
00:30:01.356 },
00:30:01.356 {
00:30:01.356 "id": 9,
00:30:01.356 "state": "FREE",
00:30:01.356 "validity": 0.0
00:30:01.356 },
00:30:01.356 {
00:30:01.356 "id": 10,
00:30:01.356 "state": "FREE",
00:30:01.356 "validity": 0.0
00:30:01.356 },
00:30:01.356 {
00:30:01.356 "id": 11,
00:30:01.356 "state": "FREE",
00:30:01.356 "validity": 0.0
00:30:01.356 },
00:30:01.356 {
00:30:01.356 "id": 12,
00:30:01.356 "state": "FREE",
00:30:01.356 "validity": 0.0
00:30:01.356 },
00:30:01.356 {
00:30:01.356 "id": 13,
00:30:01.356 "state": "FREE",
00:30:01.356 "validity": 0.0
00:30:01.356 },
00:30:01.356 {
00:30:01.356 "id": 14,
00:30:01.356 "state": "FREE",
00:30:01.356 "validity": 0.0
00:30:01.356 },
00:30:01.356 {
00:30:01.356 "id": 15,
00:30:01.356 "state": "FREE",
00:30:01.356 "validity": 0.0
00:30:01.356 },
00:30:01.356 {
00:30:01.356 "id": 16,
00:30:01.356 "state": "FREE",
00:30:01.356 "validity": 0.0
00:30:01.356 },
00:30:01.356 {
00:30:01.356 "id": 17,
00:30:01.356 "state": "FREE",
00:30:01.356 "validity": 0.0
00:30:01.356 }
00:30:01.356 ],
00:30:01.356 "read-only": true
00:30:01.356 },
00:30:01.356 {
00:30:01.356 "name": "cache_device",
00:30:01.356 "type": "bdev",
00:30:01.356 "chunks": [
00:30:01.356 {
00:30:01.356 "id": 0,
00:30:01.356 "state": "INACTIVE",
00:30:01.356 "utilization": 0.0
00:30:01.356 },
00:30:01.356 {
00:30:01.356 "id": 1,
00:30:01.356 "state": "OPEN",
00:30:01.356 "utilization": 0.0
00:30:01.356 },
00:30:01.356 {
00:30:01.356 "id": 2,
00:30:01.356 "state": "OPEN",
00:30:01.356 "utilization": 0.0
00:30:01.356 },
00:30:01.356 {
00:30:01.356 "id": 3,
00:30:01.356 "state": "FREE",
00:30:01.356 "utilization": 0.0
00:30:01.356 },
00:30:01.356 {
00:30:01.356 "id": 4,
00:30:01.356 "state": "FREE",
00:30:01.356 "utilization": 0.0
00:30:01.356 }
00:30:01.356 ],
00:30:01.356 "read-only": true
00:30:01.356 },
00:30:01.356 {
00:30:01.356 "name": "verbose_mode",
00:30:01.356 "value": true,
00:30:01.356 "unit": "",
00:30:01.356 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:30:01.356 },
00:30:01.356 {
00:30:01.356 "name": "prep_upgrade_on_shutdown",
00:30:01.356 "value": false,
00:30:01.356 "unit": "",
00:30:01.356 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:30:01.356 }
00:30:01.356 ]
00:30:01.356 }
00:30:01.356 13:02:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:30:01.356 13:02:26
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:01.356 13:02:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:01.615 13:02:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:01.615 13:02:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:01.615 13:02:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:01.615 13:02:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:01.615 13:02:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:01.873 Validate MD5 checksum, iteration 1 00:30:01.873 13:02:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:01.873 13:02:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:01.873 13:02:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:01.873 13:02:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:01.873 13:02:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:01.873 13:02:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:01.873 13:02:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:01.873 13:02:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:01.873 13:02:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:01.873 13:02:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:01.873 13:02:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:01.873 13:02:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:01.873 13:02:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:01.873 [2024-11-20 13:02:27.237609] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
00:30:01.873 [2024-11-20 13:02:27.237876] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83187 ] 00:30:02.131 [2024-11-20 13:02:27.398122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.131 [2024-11-20 13:02:27.492904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.514  [2024-11-20T13:02:29.974Z] Copying: 531/1024 [MB] (531 MBps) [2024-11-20T13:02:31.356Z] Copying: 1024/1024 [MB] (average 528 MBps) 00:30:05.837 00:30:05.837 13:02:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:05.837 13:02:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:08.368 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:08.368 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=442d189700081eb67e8e024710359734 00:30:08.368 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 442d189700081eb67e8e024710359734 != \4\4\2\d\1\8\9\7\0\0\0\8\1\e\b\6\7\e\8\e\0\2\4\7\1\0\3\5\9\7\3\4 ]] 00:30:08.368 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:08.368 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:08.368 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:08.368 Validate MD5 checksum, iteration 2 00:30:08.368 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:08.368 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:08.368 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:08.368 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:08.368 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:08.368 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:08.368 [2024-11-20 13:02:33.515183] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 
00:30:08.368 [2024-11-20 13:02:33.515890] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83254 ] 00:30:08.368 [2024-11-20 13:02:33.672925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.368 [2024-11-20 13:02:33.747105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.750  [2024-11-20T13:02:35.840Z] Copying: 679/1024 [MB] (679 MBps) [2024-11-20T13:02:36.776Z] Copying: 1024/1024 [MB] (average 672 MBps) 00:30:11.257 00:30:11.257 13:02:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:11.257 13:02:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:13.159 13:02:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:13.159 13:02:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d61378887a81b3f6384f037971e5b227 00:30:13.159 13:02:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d61378887a81b3f6384f037971e5b227 != \d\6\1\3\7\8\8\8\7\a\8\1\b\3\f\6\3\8\4\f\0\3\7\9\7\1\e\5\b\2\2\7 ]] 00:30:13.159 13:02:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:13.159 13:02:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:13.159 13:02:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:13.159 13:02:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 83094 ]] 00:30:13.159 13:02:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 83094 00:30:13.159 13:02:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:13.159 13:02:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:13.159 13:02:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:13.159 13:02:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:13.159 13:02:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:13.159 13:02:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83310 00:30:13.159 13:02:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:13.159 13:02:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83310 00:30:13.159 13:02:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:13.159 13:02:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83310 ']' 00:30:13.159 13:02:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.159 13:02:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:13.159 13:02:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
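The xtrace above and below (ftl/upgrade_shutdown.sh@96-@105 and @111-@116, ftl/common.sh@81-@91 and @137-@139) is executing a dirty-shutdown round trip: validate checksums against the target, kill -9 it without a clean FTL shutdown, restart it from tgt.json, and validate again. A minimal bash sketch of that flow, reconstructed from the trace rather than copied verbatim from the SPDK sources; `iterations`, `checksums` and `$testdir` are assumed names for state the test sets up earlier, and the internals of the harness helpers tcp_dd and waitforlisten are elided:

test_validate_checksum() {    # cf. upgrade_shutdown.sh@96-@105 in the trace
    local skip=0 i sum
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Read the next 1024 x 1 MiB blocks back from the FTL-backed NVMe/TCP namespace
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
        # A sum that differs from the one recorded before the shutdown fails the test
        if [[ $sum != "${checksums[i]}" ]]; then
            return 1
        fi
    done
}

tcp_target_shutdown_dirty() {    # cf. ftl/common.sh@137-@139
    # kill -9 skips the clean FTL shutdown path, leaving the device dirty
    [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid
    unset spdk_tgt_pid
}

tcp_target_setup() {    # cf. ftl/common.sh@81-@91 (abridged sketch)
    local base_bdev= cache_bdev=
    # tgt.json was saved by the first target, so the restarted target
    # reattaches the same bdevs and FTL startup takes its dirty-recovery path
    "$spdk_tgt_bin" "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" &
    spdk_tgt_pid=$!
    export spdk_tgt_pid
    waitforlisten $spdk_tgt_pid
}

# upgrade_shutdown.sh@111-@116: validate, dirty-kill, restart, revalidate
test_validate_checksum
tcp_target_shutdown_dirty
tcp_target_setup
test_validate_checksum

The 'Recover band state', 'Recover open bands P2L' and 'Recover open chunk' steps in the second FTL startup below are that recovery path replaying the state the kill -9 left behind.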
00:30:13.159 13:02:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:13.159 13:02:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:13.159 [2024-11-20 13:02:38.374588] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... 00:30:13.159 [2024-11-20 13:02:38.374878] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83310 ] 00:30:13.159 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 83094 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:13.159 [2024-11-20 13:02:38.530329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.159 [2024-11-20 13:02:38.620903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:14.095 [2024-11-20 13:02:39.245794] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:14.095 [2024-11-20 13:02:39.245853] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:14.095 [2024-11-20 13:02:39.394530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.095 [2024-11-20 13:02:39.394570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:14.095 [2024-11-20 13:02:39.394582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:14.095 [2024-11-20 13:02:39.394589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.095 [2024-11-20 13:02:39.394631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.095 [2024-11-20 13:02:39.394639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:14.096 [2024-11-20 13:02:39.394645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:30:14.096 [2024-11-20 13:02:39.394652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.096 [2024-11-20 13:02:39.394670] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:14.096 [2024-11-20 13:02:39.395283] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:14.096 [2024-11-20 13:02:39.395304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.096 [2024-11-20 13:02:39.395311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:14.096 [2024-11-20 13:02:39.395318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.640 ms 00:30:14.096 [2024-11-20 13:02:39.395324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.096 [2024-11-20 13:02:39.395544] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:14.096 [2024-11-20 13:02:39.409663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.096 [2024-11-20 13:02:39.409693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:14.096 [2024-11-20 13:02:39.409704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.120 ms 00:30:14.096 [2024-11-20 13:02:39.409710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.096 [2024-11-20 13:02:39.416680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:30:14.096 [2024-11-20 13:02:39.416706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:14.096 [2024-11-20 13:02:39.416716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:30:14.096 [2024-11-20 13:02:39.416722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.096 [2024-11-20 13:02:39.417013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.096 [2024-11-20 13:02:39.417023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:14.096 [2024-11-20 13:02:39.417030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.218 ms 00:30:14.096 [2024-11-20 13:02:39.417037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.096 [2024-11-20 13:02:39.417077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.096 [2024-11-20 13:02:39.417086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:14.096 [2024-11-20 13:02:39.417093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:30:14.096 [2024-11-20 13:02:39.417099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.096 [2024-11-20 13:02:39.417117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.096 [2024-11-20 13:02:39.417124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:14.096 [2024-11-20 13:02:39.417130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:14.096 [2024-11-20 13:02:39.417136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.096 [2024-11-20 13:02:39.417152] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:14.096 [2024-11-20 13:02:39.419596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.096 [2024-11-20 13:02:39.419619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:14.096 [2024-11-20 13:02:39.419626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.448 ms 00:30:14.096 [2024-11-20 13:02:39.419633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.096 [2024-11-20 13:02:39.419655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.096 [2024-11-20 13:02:39.419662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:14.096 [2024-11-20 13:02:39.419668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:14.096 [2024-11-20 13:02:39.419674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.096 [2024-11-20 13:02:39.419691] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:14.096 [2024-11-20 13:02:39.419708] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:14.096 [2024-11-20 13:02:39.419748] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:14.096 [2024-11-20 13:02:39.419763] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:14.096 [2024-11-20 13:02:39.419848] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:14.096 [2024-11-20 13:02:39.419856] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:14.096 [2024-11-20 13:02:39.419864] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:14.096 [2024-11-20 13:02:39.419873] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:14.096 [2024-11-20 13:02:39.419880] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:14.096 [2024-11-20 13:02:39.419887] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:14.096 [2024-11-20 13:02:39.419893] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:14.096 [2024-11-20 13:02:39.419907] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:14.096 [2024-11-20 13:02:39.419913] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:14.096 [2024-11-20 13:02:39.419919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.096 [2024-11-20 13:02:39.419927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:14.096 [2024-11-20 13:02:39.419934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.230 ms 00:30:14.096 [2024-11-20 13:02:39.419940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.096 [2024-11-20 13:02:39.420005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.096 [2024-11-20 13:02:39.420012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:14.096 [2024-11-20 13:02:39.420019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:14.096 [2024-11-20 13:02:39.420026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.096 [2024-11-20 13:02:39.420103] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:14.096 [2024-11-20 13:02:39.420112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:14.096 [2024-11-20 13:02:39.420121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:14.096 [2024-11-20 13:02:39.420128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.096 [2024-11-20 13:02:39.420134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:14.096 [2024-11-20 13:02:39.420140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:14.096 [2024-11-20 13:02:39.420146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:14.096 [2024-11-20 13:02:39.420151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:14.096 [2024-11-20 13:02:39.420159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:14.096 [2024-11-20 13:02:39.420165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.096 [2024-11-20 13:02:39.420170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:14.096 [2024-11-20 13:02:39.420175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:14.096 [2024-11-20 13:02:39.420181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.096 [2024-11-20 13:02:39.420186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:14.096 [2024-11-20 13:02:39.420191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:30:14.096 [2024-11-20 13:02:39.420196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.096 [2024-11-20 13:02:39.420201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:14.096 [2024-11-20 13:02:39.420207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:14.096 [2024-11-20 13:02:39.420212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.096 [2024-11-20 13:02:39.420217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:14.096 [2024-11-20 13:02:39.420223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:14.096 [2024-11-20 13:02:39.420228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:14.096 [2024-11-20 13:02:39.420233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:14.096 [2024-11-20 13:02:39.420243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:14.096 [2024-11-20 13:02:39.420248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:14.096 [2024-11-20 13:02:39.420253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:14.097 [2024-11-20 13:02:39.420258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:14.097 [2024-11-20 13:02:39.420263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:14.097 [2024-11-20 13:02:39.420268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:14.097 [2024-11-20 13:02:39.420273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:14.097 [2024-11-20 13:02:39.420278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:14.097 [2024-11-20 13:02:39.420284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:14.097 [2024-11-20 13:02:39.420289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:14.097 [2024-11-20 13:02:39.420294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.097 [2024-11-20 13:02:39.420299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:14.097 [2024-11-20 13:02:39.420305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:14.097 [2024-11-20 13:02:39.420310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.097 [2024-11-20 13:02:39.420316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:14.097 [2024-11-20 13:02:39.420320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:14.097 [2024-11-20 13:02:39.420325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.097 [2024-11-20 13:02:39.420332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:14.097 [2024-11-20 13:02:39.420338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:14.097 [2024-11-20 13:02:39.420343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:14.097 [2024-11-20 13:02:39.420349] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:14.097 [2024-11-20 13:02:39.420355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:14.097 [2024-11-20 13:02:39.420361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:14.097 [2024-11-20 13:02:39.420366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:30:14.097 [2024-11-20 13:02:39.420372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:14.097 [2024-11-20 13:02:39.420378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:14.097 [2024-11-20 13:02:39.420383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:14.097 [2024-11-20 13:02:39.420388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:14.097 [2024-11-20 13:02:39.420394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:14.097 [2024-11-20 13:02:39.420399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:14.097 [2024-11-20 13:02:39.420406] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:14.097 [2024-11-20 13:02:39.420414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:14.097 [2024-11-20 13:02:39.420420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:14.097 [2024-11-20 13:02:39.420426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:14.097 [2024-11-20 13:02:39.420431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:14.097 [2024-11-20 13:02:39.420437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:14.097 [2024-11-20 13:02:39.420445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:14.097 [2024-11-20 13:02:39.420451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:14.097 [2024-11-20 13:02:39.420457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:14.097 [2024-11-20 13:02:39.420463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:14.097 [2024-11-20 13:02:39.420468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:14.097 [2024-11-20 13:02:39.420474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:14.097 [2024-11-20 13:02:39.420480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:14.097 [2024-11-20 13:02:39.420485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:14.097 [2024-11-20 13:02:39.420491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:14.097 [2024-11-20 13:02:39.420497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:14.097 [2024-11-20 13:02:39.420503] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:30:14.097 [2024-11-20 13:02:39.420510] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:14.097 [2024-11-20 13:02:39.420517] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:14.097 [2024-11-20 13:02:39.420524] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:14.097 [2024-11-20 13:02:39.420530] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:14.097 [2024-11-20 13:02:39.420535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:14.097 [2024-11-20 13:02:39.420542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.097 [2024-11-20 13:02:39.420550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:14.097 [2024-11-20 13:02:39.420556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.492 ms 00:30:14.097 [2024-11-20 13:02:39.420562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.097 [2024-11-20 13:02:39.442256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.097 [2024-11-20 13:02:39.442282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:14.097 [2024-11-20 13:02:39.442291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.655 ms 00:30:14.097 [2024-11-20 13:02:39.442297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.097 [2024-11-20 13:02:39.442326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.097 [2024-11-20 13:02:39.442333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:14.097 [2024-11-20 13:02:39.442340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:14.097 [2024-11-20 13:02:39.442346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.097 [2024-11-20 13:02:39.468728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.097 [2024-11-20 13:02:39.468761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:14.097 [2024-11-20 13:02:39.468770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.342 ms 00:30:14.097 [2024-11-20 13:02:39.468776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.097 [2024-11-20 13:02:39.468799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.097 [2024-11-20 13:02:39.468806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:14.097 [2024-11-20 13:02:39.468812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:14.097 [2024-11-20 13:02:39.468818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.097 [2024-11-20 13:02:39.468894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.097 [2024-11-20 13:02:39.468902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:14.097 [2024-11-20 13:02:39.468909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:30:14.097 [2024-11-20 13:02:39.468916] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:14.097 [2024-11-20 13:02:39.468949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.097 [2024-11-20 13:02:39.468957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:14.097 [2024-11-20 13:02:39.468963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:30:14.097 [2024-11-20 13:02:39.468969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.097 [2024-11-20 13:02:39.482177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.097 [2024-11-20 13:02:39.482201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:14.097 [2024-11-20 13:02:39.482209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.190 ms 00:30:14.097 [2024-11-20 13:02:39.482216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.097 [2024-11-20 13:02:39.482295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.097 [2024-11-20 13:02:39.482304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:14.098 [2024-11-20 13:02:39.482311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:14.098 [2024-11-20 13:02:39.482316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.098 [2024-11-20 13:02:39.510489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.098 [2024-11-20 13:02:39.510541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:14.098 [2024-11-20 13:02:39.510559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.156 ms 00:30:14.098 [2024-11-20 13:02:39.510571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.098 [2024-11-20 13:02:39.518636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.098 [2024-11-20 13:02:39.518797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:14.098 [2024-11-20 13:02:39.518817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.421 ms 00:30:14.098 [2024-11-20 13:02:39.518823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.098 [2024-11-20 13:02:39.566804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.098 [2024-11-20 13:02:39.566841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:14.098 [2024-11-20 13:02:39.566856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.935 ms 00:30:14.098 [2024-11-20 13:02:39.566863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.098 [2024-11-20 13:02:39.566997] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:14.098 [2024-11-20 13:02:39.567104] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:14.098 [2024-11-20 13:02:39.567206] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:14.098 [2024-11-20 13:02:39.567307] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:14.098 [2024-11-20 13:02:39.567315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.098 [2024-11-20 13:02:39.567323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:14.098 [2024-11-20 
13:02:39.567331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.412 ms 00:30:14.098 [2024-11-20 13:02:39.567337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.098 [2024-11-20 13:02:39.567377] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:14.098 [2024-11-20 13:02:39.567386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.098 [2024-11-20 13:02:39.567397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:14.098 [2024-11-20 13:02:39.567404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:14.098 [2024-11-20 13:02:39.567410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.098 [2024-11-20 13:02:39.579617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.098 [2024-11-20 13:02:39.579648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:14.098 [2024-11-20 13:02:39.579657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.190 ms 00:30:14.098 [2024-11-20 13:02:39.579664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.098 [2024-11-20 13:02:39.586240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.098 [2024-11-20 13:02:39.586366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:14.098 [2024-11-20 13:02:39.586378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:14.098 [2024-11-20 13:02:39.586385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.098 [2024-11-20 13:02:39.586453] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:14.098 [2024-11-20 13:02:39.586611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.098 [2024-11-20 13:02:39.586625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:14.098 [2024-11-20 13:02:39.586633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.159 ms 00:30:14.098 [2024-11-20 13:02:39.586640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.666 [2024-11-20 13:02:40.175841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.666 [2024-11-20 13:02:40.176028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:14.666 [2024-11-20 13:02:40.176050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 588.577 ms 00:30:14.666 [2024-11-20 13:02:40.176060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.927 [2024-11-20 13:02:40.192767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.927 [2024-11-20 13:02:40.192803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:14.927 [2024-11-20 13:02:40.192814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.787 ms 00:30:14.927 [2024-11-20 13:02:40.192823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.927 [2024-11-20 13:02:40.193853] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:30:14.927 [2024-11-20 13:02:40.193887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.927 [2024-11-20 13:02:40.193897] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk
00:30:14.927 [2024-11-20 13:02:40.193907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.024 ms
00:30:14.927 [2024-11-20 13:02:40.193914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:14.927 [2024-11-20 13:02:40.193947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:14.927 [2024-11-20 13:02:40.193956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup
00:30:14.927 [2024-11-20 13:02:40.193965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms
00:30:14.927 [2024-11-20 13:02:40.193973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:14.927 [2024-11-20 13:02:40.194012] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 607.554 ms, result 0
00:30:14.927 [2024-11-20 13:02:40.194049] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15
00:30:14.927 [2024-11-20 13:02:40.194252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:14.927 [2024-11-20 13:02:40.194264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare
00:30:14.927 [2024-11-20 13:02:40.194272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.204 ms
00:30:14.927 [2024-11-20 13:02:40.194279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:15.497 [2024-11-20 13:02:40.806860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:15.497 [2024-11-20 13:02:40.806908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss
00:30:15.497 [2024-11-20 13:02:40.806920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 611.575 ms
00:30:15.497 [2024-11-20 13:02:40.806929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:15.497 [2024-11-20 13:02:40.810824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:15.497 [2024-11-20 13:02:40.810855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map
00:30:15.497 [2024-11-20 13:02:40.810865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.923 ms
00:30:15.497 [2024-11-20 13:02:40.810873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:15.497 [2024-11-20 13:02:40.811228] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15
00:30:15.497 [2024-11-20 13:02:40.811256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:15.497 [2024-11-20 13:02:40.811264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk
00:30:15.497 [2024-11-20 13:02:40.811272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.356 ms
00:30:15.497 [2024-11-20 13:02:40.811280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:15.497 [2024-11-20 13:02:40.811329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:15.497 [2024-11-20 13:02:40.811340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup
00:30:15.497 [2024-11-20 13:02:40.811348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
00:30:15.497 [2024-11-20 13:02:40.811356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:15.497 [2024-11-20 13:02:40.811390] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 617.335 ms, result 0
00:30:15.497 [2024-11-20 13:02:40.811432] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2
00:30:15.497 [2024-11-20 13:02:40.811442] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully
00:30:15.497 [2024-11-20 13:02:40.811453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:15.497 [2024-11-20 13:02:40.811461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L
00:30:15.497 [2024-11-20 13:02:40.811470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1225.013 ms
00:30:15.497 [2024-11-20 13:02:40.811479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:15.497 [2024-11-20 13:02:40.811507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:15.497 [2024-11-20 13:02:40.811516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery
00:30:15.497 [2024-11-20 13:02:40.811528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:30:15.497 [2024-11-20 13:02:40.811535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:15.497 [2024-11-20 13:02:40.823026] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB
00:30:15.497 [2024-11-20 13:02:40.823132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:15.497 [2024-11-20 13:02:40.823143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P
00:30:15.497 [2024-11-20 13:02:40.823153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.583 ms
00:30:15.497 [2024-11-20 13:02:40.823160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:15.497 [2024-11-20 13:02:40.823868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:15.497 [2024-11-20 13:02:40.823890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory
00:30:15.497 [2024-11-20 13:02:40.823913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.645 ms
00:30:15.497 [2024-11-20 13:02:40.823920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:15.497 [2024-11-20 13:02:40.826134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:15.497 [2024-11-20 13:02:40.826275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters
00:30:15.497 [2024-11-20 13:02:40.826289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.197 ms
00:30:15.497 [2024-11-20 13:02:40.826297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:15.497 [2024-11-20 13:02:40.826339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:15.497 [2024-11-20 13:02:40.826348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction
00:30:15.497 [2024-11-20 13:02:40.826356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:30:15.497 [2024-11-20 13:02:40.826369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:15.497 [2024-11-20 13:02:40.826474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:15.497 [2024-11-20 13:02:40.826484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization
00:30:15.497 [2024-11-20 13:02:40.826492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms
00:30:15.497 [2024-11-20 13:02:40.826500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:15.497 [2024-11-20 13:02:40.826520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:15.497 [2024-11-20 13:02:40.826527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller
00:30:15.497 [2024-11-20 13:02:40.826536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms
00:30:15.497 [2024-11-20 13:02:40.826543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:15.497 [2024-11-20 13:02:40.826572] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped
00:30:15.497 [2024-11-20 13:02:40.826584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:15.497 [2024-11-20 13:02:40.826592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup
00:30:15.497 [2024-11-20 13:02:40.826600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms
00:30:15.497 [2024-11-20 13:02:40.826608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:15.497 [2024-11-20 13:02:40.826661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:15.497 [2024-11-20 13:02:40.826671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:30:15.497 [2024-11-20 13:02:40.826679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms
00:30:15.497 [2024-11-20 13:02:40.826687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:15.497 [2024-11-20 13:02:40.827730] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1432.735 ms, result 0
00:30:15.497 [2024-11-20 13:02:40.843471] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:15.497 [2024-11-20 13:02:40.859465] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:30:15.497 Validate MD5 checksum, iteration 1
00:30:15.497 [2024-11-20 13:02:40.867978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:30:15.497 13:02:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:15.497 13:02:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:30:15.497 13:02:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:30:15.497 13:02:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:30:15.497 13:02:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum
00:30:15.497 13:02:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0
00:30:15.497 13:02:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
00:30:15.497 13:02:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:30:15.497 13:02:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
00:30:15.497 13:02:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:30:15.497 13:02:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:30:15.498 13:02:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:30:15.498 13:02:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:30:15.498 13:02:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:30:15.498 13:02:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:30:15.498 [2024-11-20 13:02:40.962998] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... [2024-11-20 13:02:40.963250] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83339 ]
00:30:15.756 [2024-11-20 13:02:41.118076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:15.756 [2024-11-20 13:02:41.192799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:17.136  [2024-11-20T13:02:43.227Z] Copying: 643/1024 [MB] (643 MBps) [2024-11-20T13:02:44.606Z] Copying: 1024/1024 [MB] (average 643 MBps)
00:30:19.087
00:30:19.088 13:02:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024
00:30:19.088 13:02:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:30:20.989 13:02:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:30:20.989 13:02:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=442d189700081eb67e8e024710359734
00:30:20.989 13:02:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 442d189700081eb67e8e024710359734 != \4\4\2\d\1\8\9\7\0\0\0\8\1\e\b\6\7\e\8\e\0\2\4\7\1\0\3\5\9\7\3\4 ]]
00:30:20.989 13:02:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:30:20.989 13:02:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:30:20.989 13:02:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2'
00:30:20.989 Validate MD5 checksum, iteration 2
00:30:20.989 13:02:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:30:20.989 13:02:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:30:20.989 13:02:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:30:20.989 13:02:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:30:20.989 13:02:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:30:20.989 13:02:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:30:20.989 [2024-11-20 13:02:46.429212] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization... [2024-11-20 13:02:46.429497] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83407 ]
00:30:21.248 [2024-11-20 13:02:46.589891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:21.248 [2024-11-20 13:02:46.684484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:22.698  [2024-11-20T13:02:48.789Z] Copying: 734/1024 [MB] (734 MBps) [2024-11-20T13:02:50.694Z] Copying: 1024/1024 [MB] (average 694 MBps)
00:30:25.175
00:30:25.175 13:02:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048
00:30:25.175 13:02:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:30:27.705 13:02:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:30:27.705 13:02:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d61378887a81b3f6384f037971e5b227
00:30:27.705 13:02:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d61378887a81b3f6384f037971e5b227 != \d\6\1\3\7\8\8\8\7\a\8\1\b\3\f\6\3\8\4\f\0\3\7\9\7\1\e\5\b\2\2\7 ]]
00:30:27.705 13:02:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:30:27.705 13:02:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:30:27.705 13:02:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT
00:30:27.705 13:02:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup
00:30:27.705 13:02:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT
00:30:27.705 13:02:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file
00:30:27.705 13:02:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5
00:30:27.705 13:02:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup
00:30:27.705 13:02:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup
00:30:27.705 13:02:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown
00:30:27.705 13:02:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83310 ]]
00:30:27.705 13:02:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83310
00:30:27.705 13:02:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83310 ']'
00:30:27.705 13:02:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83310
00:30:27.705 13:02:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname
00:30:27.705 13:02:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:27.705 13:02:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83310
00:30:27.705 killing process with pid 83310
00:30:27.705 13:02:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:27.705 13:02:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:27.705 13:02:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83310'
00:30:27.705 13:02:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83310
00:30:27.705 13:02:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83310
00:30:27.964 [2024-11-20 13:02:53.368454] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000
00:30:27.964 [2024-11-20 13:02:53.379072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:27.964 [2024-11-20 13:02:53.379106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel
00:30:27.964 [2024-11-20 13:02:53.379118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms
00:30:27.964 [2024-11-20 13:02:53.379124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.964 [2024-11-20 13:02:53.379143] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
00:30:27.964 [2024-11-20 13:02:53.381371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:27.964 [2024-11-20 13:02:53.381398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device
00:30:27.964 [2024-11-20 13:02:53.381407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.217 ms
00:30:27.964 [2024-11-20 13:02:53.381418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.964 [2024-11-20 13:02:53.381609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:27.965 [2024-11-20 13:02:53.381618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller
00:30:27.965 [2024-11-20 13:02:53.381625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.172 ms
00:30:27.965 [2024-11-20 13:02:53.381632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.965 [2024-11-20 13:02:53.382755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:27.965 [2024-11-20 13:02:53.382776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P
00:30:27.965 [2024-11-20 13:02:53.382783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.110 ms
00:30:27.965 [2024-11-20 13:02:53.382790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.965 [2024-11-20 13:02:53.383645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:27.965 [2024-11-20 13:02:53.383663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims
00:30:27.965 [2024-11-20 13:02:53.383672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.827 ms
00:30:27.965 [2024-11-20 13:02:53.383679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.965 [2024-11-20 13:02:53.391363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:27.965 [2024-11-20 13:02:53.391389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata
00:30:27.965 [2024-11-20 13:02:53.391397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.656 ms
00:30:27.965 [2024-11-20 13:02:53.391407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.965 [2024-11-20 13:02:53.395669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:27.965 [2024-11-20 13:02:53.395695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata
00:30:27.965 [2024-11-20 13:02:53.395703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.234 ms
00:30:27.965 [2024-11-20 13:02:53.395711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.965 [2024-11-20 13:02:53.395796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:27.965 [2024-11-20 13:02:53.395806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata
00:30:27.965 [2024-11-20 13:02:53.395813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms
00:30:27.965 [2024-11-20 13:02:53.395819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.965 [2024-11-20 13:02:53.403191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:27.965 [2024-11-20 13:02:53.403275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata
00:30:27.965 [2024-11-20 13:02:53.403316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.355 ms
00:30:27.965 [2024-11-20 13:02:53.403334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.965 [2024-11-20 13:02:53.410845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:27.965 [2024-11-20 13:02:53.410945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata
00:30:27.965 [2024-11-20 13:02:53.411004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.478 ms
00:30:27.965 [2024-11-20 13:02:53.411023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.965 [2024-11-20 13:02:53.418851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:27.965 [2024-11-20 13:02:53.418936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock
00:30:27.965 [2024-11-20 13:02:53.418975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.793 ms
00:30:27.965 [2024-11-20 13:02:53.418992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.965 [2024-11-20 13:02:53.426728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:27.965 [2024-11-20 13:02:53.426841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state
00:30:27.965 [2024-11-20 13:02:53.426894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.508 ms
00:30:27.965 [2024-11-20 13:02:53.426913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.965 [2024-11-20 13:02:53.426944] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity:
00:30:27.965 [2024-11-20 13:02:53.426967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:30:27.965 [2024-11-20 13:02:53.427028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed
00:30:27.965 [2024-11-20 13:02:53.427052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed
00:30:27.965 [2024-11-20 13:02:53.427075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:30:27.965 [2024-11-20 13:02:53.427118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:30:27.965 [2024-11-20 13:02:53.427214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:30:27.965 [2024-11-20 13:02:53.427502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:30:27.965 [2024-11-20 13:02:53.427528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:30:27.965 [2024-11-20 13:02:53.427588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:30:27.965 [2024-11-20 13:02:53.427611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:30:27.965 [2024-11-20 13:02:53.427634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:30:27.965 [2024-11-20 13:02:53.427656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:30:27.965 [2024-11-20 13:02:53.427715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:30:27.965 [2024-11-20 13:02:53.427746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:30:27.965 [2024-11-20 13:02:53.427770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:30:27.965 [2024-11-20 13:02:53.427803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:30:27.965 [2024-11-20 13:02:53.427855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:30:27.965 [2024-11-20 13:02:53.427880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:30:27.965 [2024-11-20 13:02:53.427910] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]
00:30:27.965 [2024-11-20 13:02:53.427926] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 0f9352ec-5ea0-423d-a893-6fc813d38d02
00:30:27.965 [2024-11-20 13:02:53.427948] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288
00:30:27.965 [2024-11-20 13:02:53.427982] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320
00:30:27.965 [2024-11-20 13:02:53.428042] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0
00:30:27.965 [2024-11-20 13:02:53.428060] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf
00:30:27.965 [2024-11-20 13:02:53.428101] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits:
00:30:27.965 [2024-11-20 13:02:53.428118] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0
00:30:27.965 [2024-11-20 13:02:53.428132] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0
00:30:27.965 [2024-11-20 13:02:53.428168] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0
00:30:27.965 [2024-11-20 13:02:53.428185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0
00:30:27.965 [2024-11-20 13:02:53.428200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:27.965 [2024-11-20 13:02:53.428219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics
00:30:27.965 [2024-11-20 13:02:53.428255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.256 ms
00:30:27.965 [2024-11-20 13:02:53.428272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.965 [2024-11-20 13:02:53.438576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:27.965 [2024-11-20 13:02:53.438661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P
00:30:27.965 [2024-11-20 13:02:53.438704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.255 ms
00:30:27.965 [2024-11-20 13:02:53.438721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.965 [2024-11-20 13:02:53.439040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:27.965 [2024-11-20 13:02:53.439133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:30:27.965 [2024-11-20 13:02:53.439171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.276 ms
00:30:27.965 [2024-11-20 13:02:53.439189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.965 [2024-11-20 13:02:53.474199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:30:27.965 [2024-11-20 13:02:53.474286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:30:27.965 [2024-11-20 13:02:53.474395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:30:27.965 [2024-11-20 13:02:53.474414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.965 [2024-11-20 13:02:53.474454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:30:27.965 [2024-11-20 13:02:53.474472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:30:27.965 [2024-11-20 13:02:53.474487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:30:27.965 [2024-11-20 13:02:53.474502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.965 [2024-11-20 13:02:53.474583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:30:27.965 [2024-11-20 13:02:53.474653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:30:27.965 [2024-11-20 13:02:53.474670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:30:27.965 [2024-11-20 13:02:53.474685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.965 [2024-11-20 13:02:53.474706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:30:27.965 [2024-11-20 13:02:53.474727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:30:27.965 [2024-11-20 13:02:53.474759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:30:27.965 [2024-11-20 13:02:53.474808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:28.223 [2024-11-20 13:02:53.538013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:30:28.223 [2024-11-20 13:02:53.538132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:30:28.223 [2024-11-20 13:02:53.538172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:30:28.223 [2024-11-20 13:02:53.538190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:28.223 [2024-11-20 13:02:53.589184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:30:28.223 [2024-11-20 13:02:53.589297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:30:28.223 [2024-11-20 13:02:53.589337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:30:28.223 [2024-11-20 13:02:53.589354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:28.223 [2024-11-20 13:02:53.589430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:30:28.223 [2024-11-20 13:02:53.589450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:30:28.223 [2024-11-20 13:02:53.589466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:30:28.223 [2024-11-20 13:02:53.589481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:28.223 [2024-11-20 13:02:53.589544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:30:28.223 [2024-11-20 13:02:53.589563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:30:28.223 [2024-11-20 13:02:53.589582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:30:28.223 [2024-11-20 13:02:53.589631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:28.223 [2024-11-20 13:02:53.589761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:30:28.223 [2024-11-20 13:02:53.589786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:30:28.223 [2024-11-20 13:02:53.589825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:30:28.223 [2024-11-20 13:02:53.589843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:28.223 [2024-11-20 13:02:53.589886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:30:28.223 [2024-11-20 13:02:53.589905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:30:28.223 [2024-11-20 13:02:53.589920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:30:28.223 [2024-11-20 13:02:53.589937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:28.223 [2024-11-20 13:02:53.589980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:30:28.223 [2024-11-20 13:02:53.589999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:30:28.223 [2024-11-20 13:02:53.590066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:30:28.223 [2024-11-20 13:02:53.590085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:28.223 [2024-11-20 13:02:53.590138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:30:28.223 [2024-11-20 13:02:53.590159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:30:28.223 [2024-11-20 13:02:53.590177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:30:28.223 [2024-11-20 13:02:53.590191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:28.223 [2024-11-20 13:02:53.590313] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 211.210 ms, result 0
00:30:28.791 13:02:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:30:28.791 13:02:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:30:28.791 13:02:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:30:28.791 13:02:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:30:28.791 13:02:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:30:28.791 13:02:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:30:28.791 Remove shared memory files
00:30:28.791 13:02:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:30:28.791 13:02:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:30:28.791 13:02:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:30:28.791 13:02:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:30:28.791 13:02:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid83094
00:30:28.791 13:02:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:30:28.791 13:02:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:30:28.791 ************************************
00:30:28.791 END TEST ftl_upgrade_shutdown
00:30:28.791 ************************************
00:30:28.791
00:30:28.791 real 1m19.444s
00:30:28.791 user 1m48.175s
00:30:28.791 sys 0m19.025s
00:30:28.792 13:02:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:28.792 13:02:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:30:29.051 13:02:54 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:30:29.051 13:02:54 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:30:29.051 13:02:54 ftl -- ftl/ftl.sh@14 -- # killprocess 75126
00:30:29.051 Process with pid 75126 is not found
00:30:29.051 13:02:54 ftl -- common/autotest_common.sh@954 -- # '[' -z 75126 ']'
00:30:29.051 13:02:54 ftl -- common/autotest_common.sh@958 -- # kill -0 75126
00:30:29.051 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (75126) - No such process
00:30:29.051 13:02:54 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 75126 is not found'
00:30:29.051 13:02:54 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:30:29.051 13:02:54 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=83515
00:30:29.051 13:02:54 ftl -- ftl/ftl.sh@20 -- # waitforlisten 83515
00:30:29.051 13:02:54 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:30:29.051 13:02:54 ftl -- common/autotest_common.sh@835 -- # '[' -z 83515 ']'
00:30:29.051 13:02:54 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:29.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:29.051 13:02:54 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:29.051 13:02:54 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:29.051 13:02:54 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:29.051 13:02:54 ftl -- common/autotest_common.sh@10 -- # set +x
00:30:29.051 [2024-11-20 13:02:54.401305] Starting SPDK v25.01-pre git sha1 bc5264bd5 / DPDK 24.03.0 initialization...
00:30:29.051 [2024-11-20 13:02:54.401412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83515 ]
00:30:29.051 [2024-11-20 13:02:54.547312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:29.309 [2024-11-20 13:02:54.638901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:29.875 13:02:55 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:29.875 13:02:55 ftl -- common/autotest_common.sh@868 -- # return 0
00:30:29.875 13:02:55 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:30:30.134 nvme0n1
00:30:30.134 13:02:55 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:30:30.134 13:02:55 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:30:30.134 13:02:55 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:30:30.134 13:02:55 ftl -- ftl/common.sh@28 -- # stores=60f92bfc-5bb5-4693-b2de-3de8a56539a8
00:30:30.134 13:02:55 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:30:30.134 13:02:55 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 60f92bfc-5bb5-4693-b2de-3de8a56539a8
00:30:30.392 13:02:55 ftl -- ftl/ftl.sh@23 -- # killprocess 83515
00:30:30.392 13:02:55 ftl -- common/autotest_common.sh@954 -- # '[' -z 83515 ']'
00:30:30.392 13:02:55 ftl -- common/autotest_common.sh@958 -- # kill -0 83515
00:30:30.392 13:02:55 ftl -- common/autotest_common.sh@959 -- # uname
00:30:30.392 13:02:55 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:30.392 13:02:55 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83515
00:30:30.392 killing process with pid 83515
00:30:30.392 13:02:55 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:30.392 13:02:55 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:30.392 13:02:55 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83515'
00:30:30.392 13:02:55 ftl -- common/autotest_common.sh@973 -- # kill 83515
00:30:30.392 13:02:55 ftl -- common/autotest_common.sh@978 -- # wait 83515
00:30:31.769 13:02:57 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:30:31.769 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:30:32.029 Waiting for block devices as requested
00:30:32.029 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:30:32.029 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:30:32.029 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:30:32.290 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:30:37.581 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:30:37.581 Remove shared memory files
00:30:37.581 13:03:02 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:30:37.581 13:03:02 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:30:37.581 13:03:02 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:30:37.581 13:03:02 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:30:37.581 13:03:02 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:30:37.581 13:03:02 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:30:37.581 13:03:02 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:30:37.581 ************************************
00:30:37.581 END TEST ftl
00:30:37.581 ************************************
00:30:37.581
00:30:37.581 real 12m28.099s
00:30:37.581 user 14m41.603s
00:30:37.581 sys 1m4.652s
00:30:37.581 13:03:02 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:37.581 13:03:02 ftl -- common/autotest_common.sh@10 -- # set +x
00:30:37.581 13:03:02 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:30:37.581 13:03:02 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:30:37.581 13:03:02 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:30:37.581 13:03:02 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:30:37.581 13:03:02 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:30:37.581 13:03:02 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:30:37.581 13:03:02 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:30:37.581 13:03:02 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:30:37.581 13:03:02 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:30:37.581 13:03:02 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:30:37.581 13:03:02 -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:37.581 13:03:02 -- common/autotest_common.sh@10 -- # set +x
00:30:37.581 13:03:02 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:30:37.581 13:03:02 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:30:37.581 13:03:02 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:30:37.581 13:03:02 -- common/autotest_common.sh@10 -- # set +x
00:30:38.967 INFO: APP EXITING
00:30:38.967 INFO: killing all VMs
00:30:38.967 INFO: killing vhost app
00:30:38.967 INFO: EXIT DONE
00:30:39.228 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:30:39.490 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:30:39.490 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:30:39.490 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:30:39.752 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:30:40.013 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:30:40.274 Cleaning
00:30:40.274 Removing: /var/run/dpdk/spdk0/config
00:30:40.274 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:30:40.274 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:30:40.274 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:30:40.274 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:30:40.274 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:30:40.274 Removing: /var/run/dpdk/spdk0/hugepage_info
00:30:40.274 Removing: /var/run/dpdk/spdk0
00:30:40.274 Removing: /var/run/dpdk/spdk_pid56935
00:30:40.274 Removing: /var/run/dpdk/spdk_pid57137
00:30:40.274 Removing: /var/run/dpdk/spdk_pid57344
00:30:40.274 Removing: /var/run/dpdk/spdk_pid57437
00:30:40.274 Removing: /var/run/dpdk/spdk_pid57477
00:30:40.274 Removing: /var/run/dpdk/spdk_pid57599
00:30:40.274 Removing: /var/run/dpdk/spdk_pid57617
00:30:40.274 Removing: /var/run/dpdk/spdk_pid57805
00:30:40.274 Removing: /var/run/dpdk/spdk_pid57904
00:30:40.274 Removing: /var/run/dpdk/spdk_pid58000
00:30:40.274 Removing: /var/run/dpdk/spdk_pid58105
00:30:40.274 Removing: /var/run/dpdk/spdk_pid58197
00:30:40.536 Removing: /var/run/dpdk/spdk_pid58242
00:30:40.536 Removing: /var/run/dpdk/spdk_pid58273
00:30:40.536 Removing: /var/run/dpdk/spdk_pid58349
00:30:40.536 Removing: /var/run/dpdk/spdk_pid58443
00:30:40.536 Removing: /var/run/dpdk/spdk_pid58869
00:30:40.536 Removing: /var/run/dpdk/spdk_pid58933
00:30:40.536 Removing: /var/run/dpdk/spdk_pid58996
00:30:40.536 Removing: /var/run/dpdk/spdk_pid59012
00:30:40.536 Removing: /var/run/dpdk/spdk_pid59133
00:30:40.536 Removing: /var/run/dpdk/spdk_pid59143
00:30:40.536 Removing: /var/run/dpdk/spdk_pid59251
00:30:40.536 Removing: /var/run/dpdk/spdk_pid59267
00:30:40.536 Removing: /var/run/dpdk/spdk_pid59325
00:30:40.536 Removing: /var/run/dpdk/spdk_pid59343
00:30:40.536 Removing: /var/run/dpdk/spdk_pid59396
00:30:40.536 Removing: /var/run/dpdk/spdk_pid59414
00:30:40.536 Removing: /var/run/dpdk/spdk_pid59580
00:30:40.536 Removing: /var/run/dpdk/spdk_pid59615
00:30:40.536 Removing: /var/run/dpdk/spdk_pid59700
00:30:40.536 Removing: /var/run/dpdk/spdk_pid59872
00:30:40.536 Removing: /var/run/dpdk/spdk_pid59956
00:30:40.536 Removing: /var/run/dpdk/spdk_pid59992
00:30:40.536 Removing: /var/run/dpdk/spdk_pid60430
00:30:40.536 Removing: /var/run/dpdk/spdk_pid60523
00:30:40.536 Removing: /var/run/dpdk/spdk_pid60636
00:30:40.536 Removing: /var/run/dpdk/spdk_pid60711
00:30:40.536 Removing: /var/run/dpdk/spdk_pid60742
00:30:40.536 Removing: /var/run/dpdk/spdk_pid60821
00:30:40.536 Removing: /var/run/dpdk/spdk_pid61452
00:30:40.536 Removing: /var/run/dpdk/spdk_pid61494
00:30:40.536 Removing: /var/run/dpdk/spdk_pid61996
00:30:40.536 Removing: /var/run/dpdk/spdk_pid62100
00:30:40.536 Removing: /var/run/dpdk/spdk_pid62215
00:30:40.536 Removing: /var/run/dpdk/spdk_pid62268
00:30:40.536 Removing: /var/run/dpdk/spdk_pid62298
00:30:40.536 Removing: /var/run/dpdk/spdk_pid62319
00:30:40.536 Removing: /var/run/dpdk/spdk_pid64168
00:30:40.536 Removing: /var/run/dpdk/spdk_pid64306
00:30:40.536 Removing: /var/run/dpdk/spdk_pid64310
00:30:40.536 Removing: /var/run/dpdk/spdk_pid64328
00:30:40.536 Removing: /var/run/dpdk/spdk_pid64368
00:30:40.536 Removing: /var/run/dpdk/spdk_pid64372
00:30:40.536 Removing: /var/run/dpdk/spdk_pid64384
00:30:40.536 Removing: /var/run/dpdk/spdk_pid64429
00:30:40.536 Removing: /var/run/dpdk/spdk_pid64433
00:30:40.536 Removing: /var/run/dpdk/spdk_pid64445
00:30:40.536 Removing: /var/run/dpdk/spdk_pid64490
00:30:40.536 Removing: /var/run/dpdk/spdk_pid64494
00:30:40.536 Removing: /var/run/dpdk/spdk_pid64506
00:30:40.536 Removing: /var/run/dpdk/spdk_pid65895
00:30:40.536 Removing: /var/run/dpdk/spdk_pid65992
00:30:40.536 Removing: /var/run/dpdk/spdk_pid67402
00:30:40.536 Removing: /var/run/dpdk/spdk_pid69141
00:30:40.536 Removing: /var/run/dpdk/spdk_pid69215
00:30:40.536 Removing: /var/run/dpdk/spdk_pid69290
00:30:40.536 Removing: /var/run/dpdk/spdk_pid69400
00:30:40.536 Removing: /var/run/dpdk/spdk_pid69486
00:30:40.536 Removing: /var/run/dpdk/spdk_pid69587
00:30:40.536 Removing: /var/run/dpdk/spdk_pid69660
00:30:40.536 Removing: /var/run/dpdk/spdk_pid69731
00:30:40.536 Removing: /var/run/dpdk/spdk_pid69841
00:30:40.536 Removing: /var/run/dpdk/spdk_pid69938
00:30:40.536 Removing: /var/run/dpdk/spdk_pid70028
00:30:40.536 Removing: /var/run/dpdk/spdk_pid70102
00:30:40.536 Removing: /var/run/dpdk/spdk_pid70178
00:30:40.536 Removing: /var/run/dpdk/spdk_pid70282
00:30:40.536 Removing: /var/run/dpdk/spdk_pid70379
00:30:40.536 Removing: /var/run/dpdk/spdk_pid70473
00:30:40.536 Removing: /var/run/dpdk/spdk_pid70547
00:30:40.536 Removing: /var/run/dpdk/spdk_pid70617
00:30:40.536 Removing: /var/run/dpdk/spdk_pid70727
00:30:40.536 Removing: /var/run/dpdk/spdk_pid70823
00:30:40.536 Removing: /var/run/dpdk/spdk_pid70914
00:30:40.536 Removing: /var/run/dpdk/spdk_pid70989
00:30:40.536 Removing: /var/run/dpdk/spdk_pid71059
00:30:40.536 Removing: /var/run/dpdk/spdk_pid71139
00:30:40.536 Removing: /var/run/dpdk/spdk_pid71213
00:30:40.536 Removing: /var/run/dpdk/spdk_pid71316
00:30:40.536 Removing: /var/run/dpdk/spdk_pid71407
00:30:40.536 Removing: /var/run/dpdk/spdk_pid71502
00:30:40.536 Removing: /var/run/dpdk/spdk_pid71576
00:30:40.536 Removing: /var/run/dpdk/spdk_pid71649
00:30:40.536 Removing: /var/run/dpdk/spdk_pid71719
00:30:40.536 Removing: /var/run/dpdk/spdk_pid71793
00:30:40.536 Removing: /var/run/dpdk/spdk_pid71901
00:30:40.536 Removing: /var/run/dpdk/spdk_pid71988
00:30:40.536 Removing: /var/run/dpdk/spdk_pid72132
00:30:40.536 Removing: /var/run/dpdk/spdk_pid72416
00:30:40.536 Removing: /var/run/dpdk/spdk_pid72458
00:30:40.536 Removing: /var/run/dpdk/spdk_pid72890
00:30:40.536 Removing: /var/run/dpdk/spdk_pid73083
00:30:40.536 Removing: /var/run/dpdk/spdk_pid73187
00:30:40.536 Removing: /var/run/dpdk/spdk_pid73291
00:30:40.536 Removing: /var/run/dpdk/spdk_pid73343
00:30:40.536 Removing: /var/run/dpdk/spdk_pid73371
00:30:40.536 Removing: /var/run/dpdk/spdk_pid73667
00:30:40.536 Removing: /var/run/dpdk/spdk_pid73717
00:30:40.536 Removing: /var/run/dpdk/spdk_pid73790
00:30:40.536 Removing: /var/run/dpdk/spdk_pid74175
00:30:40.536 Removing: /var/run/dpdk/spdk_pid74321
00:30:40.536 Removing: /var/run/dpdk/spdk_pid75126
00:30:40.536 Removing: /var/run/dpdk/spdk_pid75248
00:30:40.536 Removing: /var/run/dpdk/spdk_pid75425
00:30:40.536 Removing: /var/run/dpdk/spdk_pid75511
00:30:40.536 Removing: /var/run/dpdk/spdk_pid75803
00:30:40.536 Removing: /var/run/dpdk/spdk_pid76044
00:30:40.536 Removing: /var/run/dpdk/spdk_pid76397
00:30:40.536 Removing: /var/run/dpdk/spdk_pid76579
00:30:40.536 Removing: /var/run/dpdk/spdk_pid76720
00:30:40.536 Removing: /var/run/dpdk/spdk_pid76773
00:30:40.536 Removing: /var/run/dpdk/spdk_pid76939
00:30:40.536 Removing: /var/run/dpdk/spdk_pid76964
00:30:40.536 Removing: /var/run/dpdk/spdk_pid77017
00:30:40.536 Removing: /var/run/dpdk/spdk_pid77260
00:30:40.536 Removing: /var/run/dpdk/spdk_pid77490
00:30:40.536 Removing: /var/run/dpdk/spdk_pid78105
00:30:40.536 Removing: /var/run/dpdk/spdk_pid78790
00:30:40.536 Removing: /var/run/dpdk/spdk_pid79439
00:30:40.536 Removing: /var/run/dpdk/spdk_pid80151
00:30:40.536 Removing: /var/run/dpdk/spdk_pid80307
00:30:40.536 Removing: /var/run/dpdk/spdk_pid80396
00:30:40.536 Removing: /var/run/dpdk/spdk_pid80752
00:30:40.536 Removing: /var/run/dpdk/spdk_pid80807
00:30:40.536 Removing: /var/run/dpdk/spdk_pid81419
00:30:40.536 Removing: /var/run/dpdk/spdk_pid81861
00:30:40.536 Removing: /var/run/dpdk/spdk_pid82581
00:30:40.536 Removing: /var/run/dpdk/spdk_pid82706
00:30:40.536 Removing: /var/run/dpdk/spdk_pid82748
00:30:40.536 Removing: /var/run/dpdk/spdk_pid82805
00:30:40.536 Removing: /var/run/dpdk/spdk_pid82857
00:30:40.536 Removing: /var/run/dpdk/spdk_pid82915
00:30:40.536 Removing: /var/run/dpdk/spdk_pid83094
00:30:40.536 Removing: /var/run/dpdk/spdk_pid83187
00:30:40.536 Removing: /var/run/dpdk/spdk_pid83254
00:30:40.536 Removing: /var/run/dpdk/spdk_pid83310
00:30:40.536 Removing: /var/run/dpdk/spdk_pid83339
00:30:40.536 Removing: /var/run/dpdk/spdk_pid83407
00:30:40.536 Removing: /var/run/dpdk/spdk_pid83515
00:30:40.536 Clean
00:30:40.799 13:03:06 -- common/autotest_common.sh@1453 -- # return 0
00:30:40.799 13:03:06 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:30:40.799 13:03:06 -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:40.799 13:03:06 -- common/autotest_common.sh@10 -- # set +x
00:30:40.799 13:03:06 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:30:40.799 13:03:06 -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:40.799 13:03:06 -- common/autotest_common.sh@10 -- # set +x
00:30:40.799 13:03:06 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:30:41.060 13:03:06 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:30:41.060 13:03:06 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:30:41.060 13:03:06 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:30:41.060 13:03:06 -- spdk/autotest.sh@398 -- # hostname
00:30:41.060 13:03:06 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:31:07.646 geninfo: WARNING: invalid characters removed from testname!
00:31:07.646 13:03:31 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:31:10.193 13:03:35 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:31:12.743 13:03:37 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:31:15.291 13:03:40 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:31:17.906 13:03:42 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:31:20.456 13:03:45 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:31:22.374 13:03:47 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:31:22.374 13:03:47 -- spdk/autorun.sh@1 -- $ timing_finish
00:31:22.374 13:03:47 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:31:22.374 13:03:47 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:31:22.374 13:03:47 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:31:22.374 13:03:47 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:31:22.374 + [[ -n 5019 ]]
00:31:22.374 + sudo kill 5019
00:31:22.648 [Pipeline] }
00:31:22.660 [Pipeline] // timeout
00:31:22.665 [Pipeline] }
00:31:22.676 [Pipeline] // stage
00:31:22.681 [Pipeline] }
00:31:22.692 [Pipeline] // catchError
00:31:22.700 [Pipeline] stage
00:31:22.702 [Pipeline] { (Stop VM)
00:31:22.713 [Pipeline] sh
00:31:22.998 + vagrant halt
00:31:25.537 ==> default: Halting domain...
00:31:30.843 [Pipeline] sh
00:31:31.126 + vagrant destroy -f
00:31:33.669 ==> default: Removing domain...
00:31:34.255 [Pipeline] sh
00:31:34.540 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output
00:31:34.551 [Pipeline] }
00:31:34.566 [Pipeline] // stage
00:31:34.571 [Pipeline] }
00:31:34.585 [Pipeline] // dir
00:31:34.591 [Pipeline] }
00:31:34.606 [Pipeline] // wrap
00:31:34.613 [Pipeline] }
00:31:34.626 [Pipeline] // catchError
00:31:34.635 [Pipeline] stage
00:31:34.637 [Pipeline] { (Epilogue)
00:31:34.651 [Pipeline] sh
00:31:34.937 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:31:40.223 [Pipeline] catchError
00:31:40.226 [Pipeline] {
00:31:40.239 [Pipeline] sh
00:31:40.525 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:31:40.525 Artifacts sizes are good
00:31:40.536 [Pipeline] }
00:31:40.550 [Pipeline] // catchError
00:31:40.561 [Pipeline] archiveArtifacts
00:31:40.568 Archiving artifacts
00:31:40.719 [Pipeline] cleanWs
00:31:40.738 [WS-CLEANUP] Deleting project workspace...
00:31:40.738 [WS-CLEANUP] Deferred wipeout is used...
00:31:40.762 [WS-CLEANUP] done
00:31:40.764 [Pipeline] }
00:31:40.779 [Pipeline] // stage
00:31:40.784 [Pipeline] }
00:31:40.797 [Pipeline] // node
00:31:40.803 [Pipeline] End of Pipeline
00:31:40.840 Finished: SUCCESS